00:00:00.001 Started by upstream project "autotest-per-patch" build number 132409 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.101 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.102 The recommended git tool is: git 00:00:00.102 using credential 00000000-0000-0000-0000-000000000002 00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.160 Fetching changes from the remote Git repository 00:00:00.162 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.217 Using shallow fetch with depth 1 00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.217 > git --version # timeout=10 00:00:00.276 > git --version # 'git version 2.39.2' 00:00:00.276 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.314 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.314 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.854 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.866 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.880 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.880 > git config core.sparsecheckout # timeout=10 00:00:06.892 > git read-tree -mu HEAD # timeout=10 00:00:06.909 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.932 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.932 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.199 [Pipeline] Start of Pipeline 00:00:07.213 [Pipeline] library 00:00:07.214 Loading library shm_lib@master 00:00:07.215 Library shm_lib@master is cached. Copying from home. 00:00:07.230 [Pipeline] node 00:00:07.240 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.242 [Pipeline] { 00:00:07.248 [Pipeline] catchError 00:00:07.249 [Pipeline] { 00:00:07.257 [Pipeline] wrap 00:00:07.263 [Pipeline] { 00:00:07.270 [Pipeline] stage 00:00:07.271 [Pipeline] { (Prologue) 00:00:07.455 [Pipeline] sh 00:00:07.738 + logger -p user.info -t JENKINS-CI 00:00:07.753 [Pipeline] echo 00:00:07.754 Node: WFP6 00:00:07.760 [Pipeline] sh 00:00:08.056 [Pipeline] setCustomBuildProperty 00:00:08.069 [Pipeline] echo 00:00:08.071 Cleanup processes 00:00:08.077 [Pipeline] sh 00:00:08.358 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.358 1642654 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.371 [Pipeline] sh 00:00:08.659 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.659 ++ grep -v 'sudo pgrep' 00:00:08.659 ++ awk '{print $1}' 00:00:08.659 + sudo kill -9 00:00:08.659 + true 00:00:08.675 [Pipeline] cleanWs 00:00:08.686 [WS-CLEANUP] Deleting project workspace... 00:00:08.686 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.693 [WS-CLEANUP] done 00:00:08.699 [Pipeline] setCustomBuildProperty 00:00:08.718 [Pipeline] sh 00:00:09.003 + sudo git config --global --replace-all safe.directory '*' 00:00:09.107 [Pipeline] httpRequest 00:00:09.505 [Pipeline] echo 00:00:09.507 Sorcerer 10.211.164.20 is alive 00:00:09.520 [Pipeline] retry 00:00:09.523 [Pipeline] { 00:00:09.541 [Pipeline] httpRequest 00:00:09.546 HttpMethod: GET 00:00:09.546 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.546 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.558 Response Code: HTTP/1.1 200 OK 00:00:09.558 Success: Status code 200 is in the accepted range: 200,404 00:00:09.559 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.014 [Pipeline] } 00:00:13.032 [Pipeline] // retry 00:00:13.039 [Pipeline] sh 00:00:13.318 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.335 [Pipeline] httpRequest 00:00:13.726 [Pipeline] echo 00:00:13.728 Sorcerer 10.211.164.20 is alive 00:00:13.738 [Pipeline] retry 00:00:13.740 [Pipeline] { 00:00:13.756 [Pipeline] httpRequest 00:00:13.760 HttpMethod: GET 00:00:13.761 URL: http://10.211.164.20/packages/spdk_66a383faf48f77307ce1e2288d88bc9207b66d98.tar.gz 00:00:13.761 Sending request to url: http://10.211.164.20/packages/spdk_66a383faf48f77307ce1e2288d88bc9207b66d98.tar.gz 00:00:13.777 Response Code: HTTP/1.1 200 OK 00:00:13.777 Success: Status code 200 is in the accepted range: 200,404 00:00:13.778 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_66a383faf48f77307ce1e2288d88bc9207b66d98.tar.gz 00:00:56.281 [Pipeline] } 00:00:56.305 [Pipeline] // retry 00:00:56.315 [Pipeline] sh 00:00:56.598 + tar --no-same-owner -xf spdk_66a383faf48f77307ce1e2288d88bc9207b66d98.tar.gz 00:00:59.146 [Pipeline] sh 00:00:59.430 + git -C spdk log --oneline -n5 00:00:59.430 66a383faf bdevperf: Get metadata config by not bdev but bdev_desc 00:00:59.430 25916e30c bdevperf: Store the result of DIF type check into job structure 00:00:59.430 bd9804982 bdevperf: g_main_thread calls bdev_open() instead of job->thread 00:00:59.430 2e015e34f bdevperf: Remove TAILQ_REMOVE which may result in potential memory leak 00:00:59.430 aae11995f bdev/malloc: Fix unexpected DIF verification error for initial read 00:00:59.441 [Pipeline] } 00:00:59.455 [Pipeline] // stage 00:00:59.464 [Pipeline] stage 00:00:59.467 [Pipeline] { (Prepare) 00:00:59.483 [Pipeline] writeFile 00:00:59.499 [Pipeline] sh 00:00:59.781 + logger -p user.info -t JENKINS-CI 00:00:59.792 [Pipeline] sh 00:01:00.071 + logger -p user.info -t JENKINS-CI 00:01:00.084 [Pipeline] sh 00:01:00.368 + cat autorun-spdk.conf 00:01:00.368 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.368 SPDK_TEST_NVMF=1 00:01:00.368 SPDK_TEST_NVME_CLI=1 00:01:00.368 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.368 SPDK_TEST_NVMF_NICS=e810 00:01:00.368 SPDK_TEST_VFIOUSER=1 00:01:00.368 SPDK_RUN_UBSAN=1 00:01:00.368 NET_TYPE=phy 00:01:00.375 RUN_NIGHTLY=0 00:01:00.381 [Pipeline] readFile 00:01:00.407 [Pipeline] withEnv 00:01:00.409 [Pipeline] { 00:01:00.426 [Pipeline] sh 00:01:00.713 + set -ex 00:01:00.713 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:00.713 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:00.713 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.713 ++ SPDK_TEST_NVMF=1 00:01:00.713 ++ SPDK_TEST_NVME_CLI=1 
00:01:00.713 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.713 ++ SPDK_TEST_NVMF_NICS=e810 00:01:00.713 ++ SPDK_TEST_VFIOUSER=1 00:01:00.713 ++ SPDK_RUN_UBSAN=1 00:01:00.713 ++ NET_TYPE=phy 00:01:00.713 ++ RUN_NIGHTLY=0 00:01:00.713 + case $SPDK_TEST_NVMF_NICS in 00:01:00.713 + DRIVERS=ice 00:01:00.713 + [[ tcp == \r\d\m\a ]] 00:01:00.713 + [[ -n ice ]] 00:01:00.713 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:00.713 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:00.713 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:00.713 rmmod: ERROR: Module irdma is not currently loaded 00:01:00.713 rmmod: ERROR: Module i40iw is not currently loaded 00:01:00.713 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:00.713 + true 00:01:00.713 + for D in $DRIVERS 00:01:00.713 + sudo modprobe ice 00:01:00.713 + exit 0 00:01:00.722 [Pipeline] } 00:01:00.738 [Pipeline] // withEnv 00:01:00.745 [Pipeline] } 00:01:00.760 [Pipeline] // stage 00:01:00.770 [Pipeline] catchError 00:01:00.772 [Pipeline] { 00:01:00.785 [Pipeline] timeout 00:01:00.786 Timeout set to expire in 1 hr 0 min 00:01:00.787 [Pipeline] { 00:01:00.802 [Pipeline] stage 00:01:00.804 [Pipeline] { (Tests) 00:01:00.820 [Pipeline] sh 00:01:01.176 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:01.176 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:01.176 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:01.176 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:01.176 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:01.176 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:01.176 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:01.176 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:01.176 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:01.176 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:01.176 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:01.176 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:01.176 + source /etc/os-release 00:01:01.176 ++ NAME='Fedora Linux' 00:01:01.176 ++ VERSION='39 (Cloud Edition)' 00:01:01.176 ++ ID=fedora 00:01:01.176 ++ VERSION_ID=39 00:01:01.176 ++ VERSION_CODENAME= 00:01:01.176 ++ PLATFORM_ID=platform:f39 00:01:01.176 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:01.176 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:01.176 ++ LOGO=fedora-logo-icon 00:01:01.176 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:01.176 ++ HOME_URL=https://fedoraproject.org/ 00:01:01.176 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:01.176 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:01.176 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:01.176 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:01.176 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:01.176 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:01.176 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:01.176 ++ SUPPORT_END=2024-11-12 00:01:01.176 ++ VARIANT='Cloud Edition' 00:01:01.176 ++ VARIANT_ID=cloud 00:01:01.176 + uname -a 00:01:01.176 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:01.176 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:03.712 Hugepages 00:01:03.712 node hugesize free / total 00:01:03.712 node0 1048576kB 0 / 0 00:01:03.712 node0 2048kB 0 / 0 00:01:03.712 node1 1048576kB 0 / 0 00:01:03.712 node1 2048kB 0 / 0 00:01:03.712 00:01:03.712 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:03.712 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:03.712 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:03.712 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:03.712 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:03.712 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:03.712 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:03.712 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:03.712 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:03.712 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:03.712 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:03.712 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:03.712 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:03.712 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:03.712 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:03.712 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:03.712 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:03.712 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:03.712 + rm -f /tmp/spdk-ld-path 00:01:03.712 + source autorun-spdk.conf 00:01:03.712 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.712 ++ SPDK_TEST_NVMF=1 00:01:03.712 ++ SPDK_TEST_NVME_CLI=1 00:01:03.712 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:03.712 ++ SPDK_TEST_NVMF_NICS=e810 00:01:03.712 ++ SPDK_TEST_VFIOUSER=1 00:01:03.712 ++ SPDK_RUN_UBSAN=1 00:01:03.712 ++ NET_TYPE=phy 00:01:03.712 ++ RUN_NIGHTLY=0 00:01:03.712 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:03.712 + [[ -n '' ]] 00:01:03.712 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:03.712 + for M in /var/spdk/build-*-manifest.txt 00:01:03.712 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:01:03.712 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:03.712 + for M in /var/spdk/build-*-manifest.txt 00:01:03.712 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:03.712 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:03.712 + for M in /var/spdk/build-*-manifest.txt 00:01:03.712 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:03.712 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:03.712 ++ uname 00:01:03.712 + [[ Linux == \L\i\n\u\x ]] 00:01:03.712 + sudo dmesg -T 00:01:03.712 + sudo dmesg --clear 00:01:03.972 + dmesg_pid=1643580 00:01:03.972 + [[ Fedora Linux == FreeBSD ]] 00:01:03.972 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:03.972 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:03.972 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:03.972 + [[ -x /usr/src/fio-static/fio ]] 00:01:03.972 + export FIO_BIN=/usr/src/fio-static/fio 00:01:03.972 + FIO_BIN=/usr/src/fio-static/fio 00:01:03.972 + sudo dmesg -Tw 00:01:03.972 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:03.972 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:03.972 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:03.972 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:03.972 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:03.972 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:03.972 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:03.972 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:03.972 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:03.972 16:02:35 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:03.972 16:02:35 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:03.972 16:02:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.972 16:02:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:03.972 16:02:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:03.972 16:02:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:03.972 16:02:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:03.972 16:02:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:03.972 16:02:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:03.972 16:02:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:03.972 16:02:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:03.972 16:02:35 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:03.972 16:02:35 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:03.972 16:02:35 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:03.972 16:02:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:03.972 16:02:35 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:03.972 16:02:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:03.972 16:02:35 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:03.972 16:02:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:03.972 16:02:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.972 16:02:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.972 16:02:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.972 16:02:35 -- paths/export.sh@5 -- $ export PATH 00:01:03.972 16:02:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.972 16:02:35 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:03.972 16:02:35 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:03.972 16:02:35 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732114955.XXXXXX 00:01:03.972 16:02:35 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732114955.h3S8BR 00:01:03.972 16:02:35 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:03.972 16:02:35 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:03.972 16:02:35 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:03.972 16:02:35 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:03.972 16:02:35 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:03.972 16:02:35 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:03.972 16:02:35 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:03.972 16:02:35 -- common/autotest_common.sh@10 -- $ set +x 
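
A note on the flow being traced here: spdk/autorun.sh takes the path to the job's autorun-spdk.conf as its only argument, sources it, and then hands off to autobuild.sh (see the autorun.sh@20 and autorun.sh@25 lines above). A minimal local sketch of the same flow, assuming an SPDK checkout at ./spdk and reusing the configuration values the job wrote earlier; everything host-specific is illustrative:

    # Recreate the job configuration that autorun.sh will source (values copied from the log above)
    printf '%s\n' \
        SPDK_RUN_FUNCTIONAL_TEST=1 \
        SPDK_TEST_NVMF=1 \
        SPDK_TEST_NVME_CLI=1 \
        SPDK_TEST_NVMF_TRANSPORT=tcp \
        SPDK_TEST_NVMF_NICS=e810 \
        SPDK_TEST_VFIOUSER=1 \
        SPDK_RUN_UBSAN=1 \
        NET_TYPE=phy \
        RUN_NIGHTLY=0 > autorun-spdk.conf
    # autorun.sh sources the conf and drives autobuild.sh against it, as the trace above shows
    ./spdk/autorun.sh "$PWD/autorun-spdk.conf"
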
00:01:03.972 16:02:35 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:03.972 16:02:35 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:03.972 16:02:35 -- pm/common@17 -- $ local monitor 00:01:03.972 16:02:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.972 16:02:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.972 16:02:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.972 16:02:35 -- pm/common@21 -- $ date +%s 00:01:03.972 16:02:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.972 16:02:35 -- pm/common@21 -- $ date +%s 00:01:03.972 16:02:35 -- pm/common@25 -- $ sleep 1 00:01:03.972 16:02:35 -- pm/common@21 -- $ date +%s 00:01:03.972 16:02:35 -- pm/common@21 -- $ date +%s 00:01:03.972 16:02:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114955 00:01:03.972 16:02:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114955 00:01:03.972 16:02:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114955 00:01:03.972 16:02:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114955 00:01:03.973 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114955_collect-cpu-load.pm.log 00:01:03.973 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114955_collect-vmstat.pm.log 00:01:03.973 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114955_collect-cpu-temp.pm.log 00:01:03.973 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114955_collect-bmc-pm.bmc.pm.log 00:01:04.911 16:02:36 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:04.911 16:02:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:04.911 16:02:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:04.911 16:02:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:04.911 16:02:36 -- spdk/autobuild.sh@16 -- $ date -u 00:01:04.911 Wed Nov 20 03:02:36 PM UTC 2024 00:01:04.911 16:02:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:05.171 v25.01-pre-238-g66a383faf 00:01:05.171 16:02:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:05.171 16:02:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:05.171 16:02:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:05.171 16:02:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:05.171 16:02:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:05.171 16:02:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:05.171 
************************************ 00:01:05.171 START TEST ubsan 00:01:05.171 ************************************ 00:01:05.171 16:02:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:05.171 using ubsan 00:01:05.171 00:01:05.171 real 0m0.000s 00:01:05.171 user 0m0.000s 00:01:05.171 sys 0m0.000s 00:01:05.171 16:02:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:05.171 16:02:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:05.171 ************************************ 00:01:05.171 END TEST ubsan 00:01:05.171 ************************************ 00:01:05.171 16:02:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:05.171 16:02:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:05.171 16:02:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:05.171 16:02:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:05.171 16:02:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:05.171 16:02:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:05.171 16:02:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:05.171 16:02:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:05.172 16:02:36 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:05.172 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:05.172 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:05.740 Using 'verbs' RDMA provider 00:01:18.518 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:30.729 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:30.729 Creating mk/config.mk...done. 00:01:30.729 Creating mk/cc.flags.mk...done. 00:01:30.729 Type 'make' to build. 00:01:30.729 16:03:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:30.729 16:03:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:30.729 16:03:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:30.729 16:03:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.729 ************************************ 00:01:30.729 START TEST make 00:01:30.729 ************************************ 00:01:30.729 16:03:01 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:30.987 make[1]: Nothing to be done for 'all'. 
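
For reference, the configure-and-build step traced above reduces to two commands; this is only a condensed restatement of what the log already shows, with the configure flags copied from the autobuild.sh@67 invocation and -j96 matching the run_test make call on this particular host:

    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
                --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
                --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j96
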
00:01:32.374 The Meson build system 00:01:32.374 Version: 1.5.0 00:01:32.374 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:32.374 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.374 Build type: native build 00:01:32.374 Project name: libvfio-user 00:01:32.374 Project version: 0.0.1 00:01:32.374 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:32.374 C linker for the host machine: cc ld.bfd 2.40-14 00:01:32.374 Host machine cpu family: x86_64 00:01:32.374 Host machine cpu: x86_64 00:01:32.374 Run-time dependency threads found: YES 00:01:32.374 Library dl found: YES 00:01:32.374 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:32.374 Run-time dependency json-c found: YES 0.17 00:01:32.374 Run-time dependency cmocka found: YES 1.1.7 00:01:32.374 Program pytest-3 found: NO 00:01:32.374 Program flake8 found: NO 00:01:32.374 Program misspell-fixer found: NO 00:01:32.374 Program restructuredtext-lint found: NO 00:01:32.374 Program valgrind found: YES (/usr/bin/valgrind) 00:01:32.374 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:32.374 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:32.374 Compiler for C supports arguments -Wwrite-strings: YES 00:01:32.374 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:32.374 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:32.374 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:32.374 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:32.374 Build targets in project: 8 00:01:32.374 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:32.374 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:32.374 00:01:32.374 libvfio-user 0.0.1 00:01:32.374 00:01:32.374 User defined options 00:01:32.374 buildtype : debug 00:01:32.374 default_library: shared 00:01:32.374 libdir : /usr/local/lib 00:01:32.374 00:01:32.374 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:32.942 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.942 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:32.942 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:32.942 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:32.942 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:32.942 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:33.200 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:33.200 [7/37] Compiling C object samples/null.p/null.c.o 00:01:33.200 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:33.200 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:33.200 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:33.200 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:33.200 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:33.200 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:33.200 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:33.200 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:33.200 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:33.200 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:33.200 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:33.200 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:33.200 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:33.200 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:33.200 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:33.200 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:33.200 [24/37] Compiling C object samples/server.p/server.c.o 00:01:33.200 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:33.200 [26/37] Compiling C object samples/client.p/client.c.o 00:01:33.200 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:33.200 [28/37] Linking target samples/client 00:01:33.200 [29/37] Linking target test/unit_tests 00:01:33.200 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:33.200 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:33.459 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:33.459 [33/37] Linking target samples/null 00:01:33.459 [34/37] Linking target samples/lspci 00:01:33.459 [35/37] Linking target samples/server 00:01:33.459 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:33.459 [37/37] Linking target samples/gpio-pci-idio-16 00:01:33.459 INFO: autodetecting backend as ninja 00:01:33.459 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
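
The libvfio-user sub-build above is an ordinary out-of-tree Meson/Ninja flow. A rough local equivalent, with the long CI workspace paths shortened to relative ones and the option values taken from the "User defined options" summary (the DESTDIR staging install is the command the log runs next), might look like:

    # Configure the out-of-tree build directory; buildtype/default_library/libdir match the summary above
    meson setup spdk/build/libvfio-user/build-debug spdk/libvfio-user \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    # Compile the 37 targets listed above
    ninja -C spdk/build/libvfio-user/build-debug
    # Stage the install under DESTDIR, as the pipeline does on the next line
    DESTDIR="$PWD/spdk/build/libvfio-user" meson install --quiet -C spdk/build/libvfio-user/build-debug
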
00:01:33.459 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:34.026 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:34.026 ninja: no work to do. 00:01:39.297 The Meson build system 00:01:39.297 Version: 1.5.0 00:01:39.297 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:39.297 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:39.297 Build type: native build 00:01:39.297 Program cat found: YES (/usr/bin/cat) 00:01:39.297 Project name: DPDK 00:01:39.297 Project version: 24.03.0 00:01:39.297 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:39.297 C linker for the host machine: cc ld.bfd 2.40-14 00:01:39.297 Host machine cpu family: x86_64 00:01:39.297 Host machine cpu: x86_64 00:01:39.297 Message: ## Building in Developer Mode ## 00:01:39.297 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:39.297 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:39.297 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:39.297 Program python3 found: YES (/usr/bin/python3) 00:01:39.297 Program cat found: YES (/usr/bin/cat) 00:01:39.297 Compiler for C supports arguments -march=native: YES 00:01:39.297 Checking for size of "void *" : 8 00:01:39.297 Checking for size of "void *" : 8 (cached) 00:01:39.297 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:39.297 Library m found: YES 00:01:39.297 Library numa found: YES 00:01:39.297 Has header "numaif.h" : YES 00:01:39.297 Library fdt found: NO 00:01:39.297 Library execinfo found: NO 00:01:39.297 Has header "execinfo.h" : YES 00:01:39.297 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:39.297 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:39.297 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:39.297 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:39.297 Run-time dependency openssl found: YES 3.1.1 00:01:39.297 Run-time dependency libpcap found: YES 1.10.4 00:01:39.297 Has header "pcap.h" with dependency libpcap: YES 00:01:39.297 Compiler for C supports arguments -Wcast-qual: YES 00:01:39.297 Compiler for C supports arguments -Wdeprecated: YES 00:01:39.297 Compiler for C supports arguments -Wformat: YES 00:01:39.297 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:39.297 Compiler for C supports arguments -Wformat-security: NO 00:01:39.297 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.297 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:39.297 Compiler for C supports arguments -Wnested-externs: YES 00:01:39.297 Compiler for C supports arguments -Wold-style-definition: YES 00:01:39.297 Compiler for C supports arguments -Wpointer-arith: YES 00:01:39.297 Compiler for C supports arguments -Wsign-compare: YES 00:01:39.297 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:39.297 Compiler for C supports arguments -Wundef: YES 00:01:39.297 Compiler for C supports arguments -Wwrite-strings: YES 00:01:39.297 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:39.297 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:39.297 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.298 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:39.298 Program objdump found: YES (/usr/bin/objdump) 00:01:39.298 Compiler for C supports arguments -mavx512f: YES 00:01:39.298 Checking if "AVX512 checking" compiles: YES 00:01:39.298 Fetching value of define "__SSE4_2__" : 1 00:01:39.298 Fetching value of define "__AES__" : 1 00:01:39.298 Fetching value of define "__AVX__" : 1 00:01:39.298 Fetching value of define "__AVX2__" : 1 00:01:39.298 Fetching value of define "__AVX512BW__" : 1 00:01:39.298 Fetching value of define "__AVX512CD__" : 1 00:01:39.298 Fetching value of define "__AVX512DQ__" : 1 00:01:39.298 Fetching value of define "__AVX512F__" : 1 00:01:39.298 Fetching value of define "__AVX512VL__" : 1 00:01:39.298 Fetching value of define "__PCLMUL__" : 1 00:01:39.298 Fetching value of define "__RDRND__" : 1 00:01:39.298 Fetching value of define "__RDSEED__" : 1 00:01:39.298 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:39.298 Fetching value of define "__znver1__" : (undefined) 00:01:39.298 Fetching value of define "__znver2__" : (undefined) 00:01:39.298 Fetching value of define "__znver3__" : (undefined) 00:01:39.298 Fetching value of define "__znver4__" : (undefined) 00:01:39.298 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:39.298 Message: lib/log: Defining dependency "log" 00:01:39.298 Message: lib/kvargs: Defining dependency "kvargs" 00:01:39.298 Message: lib/telemetry: Defining dependency "telemetry" 00:01:39.298 Checking for function "getentropy" : NO 00:01:39.298 Message: lib/eal: Defining dependency "eal" 00:01:39.298 Message: lib/ring: Defining dependency "ring" 00:01:39.298 Message: lib/rcu: Defining dependency "rcu" 00:01:39.298 Message: lib/mempool: Defining dependency "mempool" 00:01:39.298 Message: lib/mbuf: Defining dependency "mbuf" 00:01:39.298 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:39.298 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:39.298 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:39.298 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:39.298 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:39.298 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:39.298 Compiler for C supports arguments -mpclmul: YES 00:01:39.298 Compiler for C supports arguments -maes: YES 00:01:39.298 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:39.298 Compiler for C supports arguments -mavx512bw: YES 00:01:39.298 Compiler for C supports arguments -mavx512dq: YES 00:01:39.298 Compiler for C supports arguments -mavx512vl: YES 00:01:39.298 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:39.298 Compiler for C supports arguments -mavx2: YES 00:01:39.298 Compiler for C supports arguments -mavx: YES 00:01:39.298 Message: lib/net: Defining dependency "net" 00:01:39.298 Message: lib/meter: Defining dependency "meter" 00:01:39.298 Message: lib/ethdev: Defining dependency "ethdev" 00:01:39.298 Message: lib/pci: Defining dependency "pci" 00:01:39.298 Message: lib/cmdline: Defining dependency "cmdline" 00:01:39.298 Message: lib/hash: Defining dependency "hash" 00:01:39.298 Message: lib/timer: Defining dependency "timer" 00:01:39.298 Message: lib/compressdev: Defining dependency "compressdev" 00:01:39.298 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:39.298 Message: lib/dmadev: Defining dependency 
"dmadev" 00:01:39.298 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:39.298 Message: lib/power: Defining dependency "power" 00:01:39.298 Message: lib/reorder: Defining dependency "reorder" 00:01:39.298 Message: lib/security: Defining dependency "security" 00:01:39.298 Has header "linux/userfaultfd.h" : YES 00:01:39.298 Has header "linux/vduse.h" : YES 00:01:39.298 Message: lib/vhost: Defining dependency "vhost" 00:01:39.298 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:39.298 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:39.298 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:39.298 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:39.298 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:39.298 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:39.298 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:39.298 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:39.298 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:39.298 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:39.298 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:39.298 Configuring doxy-api-html.conf using configuration 00:01:39.298 Configuring doxy-api-man.conf using configuration 00:01:39.298 Program mandb found: YES (/usr/bin/mandb) 00:01:39.298 Program sphinx-build found: NO 00:01:39.298 Configuring rte_build_config.h using configuration 00:01:39.298 Message: 00:01:39.298 ================= 00:01:39.298 Applications Enabled 00:01:39.298 ================= 00:01:39.298 00:01:39.298 apps: 00:01:39.298 00:01:39.298 00:01:39.298 Message: 00:01:39.298 ================= 00:01:39.298 Libraries Enabled 00:01:39.298 ================= 00:01:39.298 00:01:39.298 libs: 00:01:39.298 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:39.298 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:39.298 cryptodev, dmadev, power, reorder, security, vhost, 00:01:39.298 00:01:39.298 Message: 00:01:39.298 =============== 00:01:39.298 Drivers Enabled 00:01:39.298 =============== 00:01:39.298 00:01:39.298 common: 00:01:39.298 00:01:39.298 bus: 00:01:39.298 pci, vdev, 00:01:39.298 mempool: 00:01:39.298 ring, 00:01:39.298 dma: 00:01:39.298 00:01:39.298 net: 00:01:39.298 00:01:39.298 crypto: 00:01:39.298 00:01:39.298 compress: 00:01:39.298 00:01:39.298 vdpa: 00:01:39.298 00:01:39.298 00:01:39.298 Message: 00:01:39.298 ================= 00:01:39.298 Content Skipped 00:01:39.298 ================= 00:01:39.298 00:01:39.298 apps: 00:01:39.298 dumpcap: explicitly disabled via build config 00:01:39.298 graph: explicitly disabled via build config 00:01:39.298 pdump: explicitly disabled via build config 00:01:39.298 proc-info: explicitly disabled via build config 00:01:39.298 test-acl: explicitly disabled via build config 00:01:39.298 test-bbdev: explicitly disabled via build config 00:01:39.298 test-cmdline: explicitly disabled via build config 00:01:39.298 test-compress-perf: explicitly disabled via build config 00:01:39.298 test-crypto-perf: explicitly disabled via build config 00:01:39.298 test-dma-perf: explicitly disabled via build config 00:01:39.298 test-eventdev: explicitly disabled via build config 00:01:39.298 test-fib: explicitly disabled via build config 00:01:39.298 test-flow-perf: explicitly disabled via build config 00:01:39.298 test-gpudev: explicitly 
disabled via build config 00:01:39.298 test-mldev: explicitly disabled via build config 00:01:39.298 test-pipeline: explicitly disabled via build config 00:01:39.298 test-pmd: explicitly disabled via build config 00:01:39.298 test-regex: explicitly disabled via build config 00:01:39.298 test-sad: explicitly disabled via build config 00:01:39.298 test-security-perf: explicitly disabled via build config 00:01:39.298 00:01:39.298 libs: 00:01:39.298 argparse: explicitly disabled via build config 00:01:39.298 metrics: explicitly disabled via build config 00:01:39.298 acl: explicitly disabled via build config 00:01:39.298 bbdev: explicitly disabled via build config 00:01:39.298 bitratestats: explicitly disabled via build config 00:01:39.298 bpf: explicitly disabled via build config 00:01:39.298 cfgfile: explicitly disabled via build config 00:01:39.298 distributor: explicitly disabled via build config 00:01:39.298 efd: explicitly disabled via build config 00:01:39.298 eventdev: explicitly disabled via build config 00:01:39.298 dispatcher: explicitly disabled via build config 00:01:39.298 gpudev: explicitly disabled via build config 00:01:39.298 gro: explicitly disabled via build config 00:01:39.298 gso: explicitly disabled via build config 00:01:39.298 ip_frag: explicitly disabled via build config 00:01:39.298 jobstats: explicitly disabled via build config 00:01:39.298 latencystats: explicitly disabled via build config 00:01:39.298 lpm: explicitly disabled via build config 00:01:39.298 member: explicitly disabled via build config 00:01:39.298 pcapng: explicitly disabled via build config 00:01:39.298 rawdev: explicitly disabled via build config 00:01:39.298 regexdev: explicitly disabled via build config 00:01:39.298 mldev: explicitly disabled via build config 00:01:39.298 rib: explicitly disabled via build config 00:01:39.298 sched: explicitly disabled via build config 00:01:39.298 stack: explicitly disabled via build config 00:01:39.298 ipsec: explicitly disabled via build config 00:01:39.298 pdcp: explicitly disabled via build config 00:01:39.298 fib: explicitly disabled via build config 00:01:39.298 port: explicitly disabled via build config 00:01:39.298 pdump: explicitly disabled via build config 00:01:39.298 table: explicitly disabled via build config 00:01:39.298 pipeline: explicitly disabled via build config 00:01:39.298 graph: explicitly disabled via build config 00:01:39.298 node: explicitly disabled via build config 00:01:39.298 00:01:39.298 drivers: 00:01:39.298 common/cpt: not in enabled drivers build config 00:01:39.298 common/dpaax: not in enabled drivers build config 00:01:39.298 common/iavf: not in enabled drivers build config 00:01:39.298 common/idpf: not in enabled drivers build config 00:01:39.298 common/ionic: not in enabled drivers build config 00:01:39.298 common/mvep: not in enabled drivers build config 00:01:39.298 common/octeontx: not in enabled drivers build config 00:01:39.298 bus/auxiliary: not in enabled drivers build config 00:01:39.298 bus/cdx: not in enabled drivers build config 00:01:39.298 bus/dpaa: not in enabled drivers build config 00:01:39.298 bus/fslmc: not in enabled drivers build config 00:01:39.298 bus/ifpga: not in enabled drivers build config 00:01:39.298 bus/platform: not in enabled drivers build config 00:01:39.298 bus/uacce: not in enabled drivers build config 00:01:39.298 bus/vmbus: not in enabled drivers build config 00:01:39.298 common/cnxk: not in enabled drivers build config 00:01:39.298 common/mlx5: not in enabled drivers build config 
00:01:39.298 common/nfp: not in enabled drivers build config 00:01:39.299 common/nitrox: not in enabled drivers build config 00:01:39.299 common/qat: not in enabled drivers build config 00:01:39.299 common/sfc_efx: not in enabled drivers build config 00:01:39.299 mempool/bucket: not in enabled drivers build config 00:01:39.299 mempool/cnxk: not in enabled drivers build config 00:01:39.299 mempool/dpaa: not in enabled drivers build config 00:01:39.299 mempool/dpaa2: not in enabled drivers build config 00:01:39.299 mempool/octeontx: not in enabled drivers build config 00:01:39.299 mempool/stack: not in enabled drivers build config 00:01:39.299 dma/cnxk: not in enabled drivers build config 00:01:39.299 dma/dpaa: not in enabled drivers build config 00:01:39.299 dma/dpaa2: not in enabled drivers build config 00:01:39.299 dma/hisilicon: not in enabled drivers build config 00:01:39.299 dma/idxd: not in enabled drivers build config 00:01:39.299 dma/ioat: not in enabled drivers build config 00:01:39.299 dma/skeleton: not in enabled drivers build config 00:01:39.299 net/af_packet: not in enabled drivers build config 00:01:39.299 net/af_xdp: not in enabled drivers build config 00:01:39.299 net/ark: not in enabled drivers build config 00:01:39.299 net/atlantic: not in enabled drivers build config 00:01:39.299 net/avp: not in enabled drivers build config 00:01:39.299 net/axgbe: not in enabled drivers build config 00:01:39.299 net/bnx2x: not in enabled drivers build config 00:01:39.299 net/bnxt: not in enabled drivers build config 00:01:39.299 net/bonding: not in enabled drivers build config 00:01:39.299 net/cnxk: not in enabled drivers build config 00:01:39.299 net/cpfl: not in enabled drivers build config 00:01:39.299 net/cxgbe: not in enabled drivers build config 00:01:39.299 net/dpaa: not in enabled drivers build config 00:01:39.299 net/dpaa2: not in enabled drivers build config 00:01:39.299 net/e1000: not in enabled drivers build config 00:01:39.299 net/ena: not in enabled drivers build config 00:01:39.299 net/enetc: not in enabled drivers build config 00:01:39.299 net/enetfec: not in enabled drivers build config 00:01:39.299 net/enic: not in enabled drivers build config 00:01:39.299 net/failsafe: not in enabled drivers build config 00:01:39.299 net/fm10k: not in enabled drivers build config 00:01:39.299 net/gve: not in enabled drivers build config 00:01:39.299 net/hinic: not in enabled drivers build config 00:01:39.299 net/hns3: not in enabled drivers build config 00:01:39.299 net/i40e: not in enabled drivers build config 00:01:39.299 net/iavf: not in enabled drivers build config 00:01:39.299 net/ice: not in enabled drivers build config 00:01:39.299 net/idpf: not in enabled drivers build config 00:01:39.299 net/igc: not in enabled drivers build config 00:01:39.299 net/ionic: not in enabled drivers build config 00:01:39.299 net/ipn3ke: not in enabled drivers build config 00:01:39.299 net/ixgbe: not in enabled drivers build config 00:01:39.299 net/mana: not in enabled drivers build config 00:01:39.299 net/memif: not in enabled drivers build config 00:01:39.299 net/mlx4: not in enabled drivers build config 00:01:39.299 net/mlx5: not in enabled drivers build config 00:01:39.299 net/mvneta: not in enabled drivers build config 00:01:39.299 net/mvpp2: not in enabled drivers build config 00:01:39.299 net/netvsc: not in enabled drivers build config 00:01:39.299 net/nfb: not in enabled drivers build config 00:01:39.299 net/nfp: not in enabled drivers build config 00:01:39.299 net/ngbe: not in enabled 
drivers build config 00:01:39.299 net/null: not in enabled drivers build config 00:01:39.299 net/octeontx: not in enabled drivers build config 00:01:39.299 net/octeon_ep: not in enabled drivers build config 00:01:39.299 net/pcap: not in enabled drivers build config 00:01:39.299 net/pfe: not in enabled drivers build config 00:01:39.299 net/qede: not in enabled drivers build config 00:01:39.299 net/ring: not in enabled drivers build config 00:01:39.299 net/sfc: not in enabled drivers build config 00:01:39.299 net/softnic: not in enabled drivers build config 00:01:39.299 net/tap: not in enabled drivers build config 00:01:39.299 net/thunderx: not in enabled drivers build config 00:01:39.299 net/txgbe: not in enabled drivers build config 00:01:39.299 net/vdev_netvsc: not in enabled drivers build config 00:01:39.299 net/vhost: not in enabled drivers build config 00:01:39.299 net/virtio: not in enabled drivers build config 00:01:39.299 net/vmxnet3: not in enabled drivers build config 00:01:39.299 raw/*: missing internal dependency, "rawdev" 00:01:39.299 crypto/armv8: not in enabled drivers build config 00:01:39.299 crypto/bcmfs: not in enabled drivers build config 00:01:39.299 crypto/caam_jr: not in enabled drivers build config 00:01:39.299 crypto/ccp: not in enabled drivers build config 00:01:39.299 crypto/cnxk: not in enabled drivers build config 00:01:39.299 crypto/dpaa_sec: not in enabled drivers build config 00:01:39.299 crypto/dpaa2_sec: not in enabled drivers build config 00:01:39.299 crypto/ipsec_mb: not in enabled drivers build config 00:01:39.299 crypto/mlx5: not in enabled drivers build config 00:01:39.299 crypto/mvsam: not in enabled drivers build config 00:01:39.299 crypto/nitrox: not in enabled drivers build config 00:01:39.299 crypto/null: not in enabled drivers build config 00:01:39.299 crypto/octeontx: not in enabled drivers build config 00:01:39.299 crypto/openssl: not in enabled drivers build config 00:01:39.299 crypto/scheduler: not in enabled drivers build config 00:01:39.299 crypto/uadk: not in enabled drivers build config 00:01:39.299 crypto/virtio: not in enabled drivers build config 00:01:39.299 compress/isal: not in enabled drivers build config 00:01:39.299 compress/mlx5: not in enabled drivers build config 00:01:39.299 compress/nitrox: not in enabled drivers build config 00:01:39.299 compress/octeontx: not in enabled drivers build config 00:01:39.299 compress/zlib: not in enabled drivers build config 00:01:39.299 regex/*: missing internal dependency, "regexdev" 00:01:39.299 ml/*: missing internal dependency, "mldev" 00:01:39.299 vdpa/ifc: not in enabled drivers build config 00:01:39.299 vdpa/mlx5: not in enabled drivers build config 00:01:39.299 vdpa/nfp: not in enabled drivers build config 00:01:39.299 vdpa/sfc: not in enabled drivers build config 00:01:39.299 event/*: missing internal dependency, "eventdev" 00:01:39.299 baseband/*: missing internal dependency, "bbdev" 00:01:39.299 gpu/*: missing internal dependency, "gpudev" 00:01:39.299 00:01:39.299 00:01:39.556 Build targets in project: 85 00:01:39.556 00:01:39.556 DPDK 24.03.0 00:01:39.556 00:01:39.556 User defined options 00:01:39.556 buildtype : debug 00:01:39.556 default_library : shared 00:01:39.556 libdir : lib 00:01:39.556 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:39.556 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:39.556 c_link_args : 00:01:39.556 cpu_instruction_set: native 00:01:39.556 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:39.556 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:39.556 enable_docs : false 00:01:39.556 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:39.556 enable_kmods : false 00:01:39.556 max_lcores : 128 00:01:39.556 tests : false 00:01:39.556 00:01:39.556 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.830 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:39.830 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:40.097 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:40.097 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:40.097 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:40.097 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:40.097 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:40.097 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:40.097 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:40.097 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:40.097 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:40.097 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:40.097 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:40.097 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:40.097 [14/268] Linking static target lib/librte_kvargs.a 00:01:40.097 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:40.097 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:40.097 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:40.097 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:40.097 [19/268] Linking static target lib/librte_log.a 00:01:40.358 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:40.358 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:40.358 [22/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:40.358 [23/268] Linking static target lib/librte_pci.a 00:01:40.358 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:40.358 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:40.358 [26/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:40.358 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:40.358 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:40.358 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:40.358 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:40.358 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:40.358 [32/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:40.358 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:40.358 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:40.358 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:40.616 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:40.616 [37/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:40.616 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:40.616 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:40.616 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:40.616 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:40.616 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:40.616 [43/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.616 [44/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:40.616 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:40.616 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:40.616 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:40.616 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:40.616 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:40.616 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:40.616 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:40.616 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:40.616 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:40.617 [54/268] Linking static target lib/librte_meter.a 00:01:40.617 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:40.617 [56/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.617 [57/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.617 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:40.617 [59/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:40.617 [60/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:40.617 [61/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:40.617 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:40.617 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:40.617 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:40.617 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:40.617 [66/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:40.617 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:40.617 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:40.617 [69/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.617 [70/268] 
Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:40.617 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:40.617 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:40.617 [73/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:40.617 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:40.617 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:40.617 [76/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:40.617 [77/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.617 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:40.617 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:40.617 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:40.617 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:40.617 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:40.617 [83/268] Linking static target lib/librte_ring.a 00:01:40.617 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:40.617 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:40.617 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:40.617 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:40.617 [88/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.617 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:40.617 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.617 [91/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:40.617 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:40.617 [93/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:40.617 [94/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.617 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:40.617 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.617 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:40.617 [98/268] Linking static target lib/librte_telemetry.a 00:01:40.617 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.617 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.617 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.617 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:40.617 [103/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:40.617 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.617 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:40.617 [106/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.617 [107/268] Linking static target lib/librte_mempool.a 00:01:40.617 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.617 [109/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:40.617 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:40.617 [111/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.617 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:40.876 [113/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.876 [114/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.876 [115/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.876 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.876 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.876 [118/268] Linking static target lib/librte_rcu.a 00:01:40.876 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.876 [120/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.876 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.876 [122/268] Linking static target lib/librte_net.a 00:01:40.876 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.876 [124/268] Linking static target lib/librte_eal.a 00:01:40.876 [125/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.876 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.876 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:40.876 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.876 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.876 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.876 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.876 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.876 [133/268] Linking static target lib/librte_cmdline.a 00:01:40.876 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.876 [135/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.876 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.876 [137/268] Linking static target lib/librte_mbuf.a 00:01:40.876 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.876 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.876 [140/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.876 [141/268] Linking target lib/librte_log.so.24.1 00:01:40.876 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.876 [143/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.876 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:40.876 [145/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.876 [146/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:41.134 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:41.134 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:41.134 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:41.134 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:41.134 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:41.134 [152/268] Linking static target lib/librte_dmadev.a 00:01:41.134 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:41.134 [154/268] Linking static target lib/librte_timer.a 00:01:41.134 [155/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.134 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:41.134 [157/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:41.134 [158/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:41.134 [159/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.134 [160/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:41.134 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:41.134 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:41.134 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:41.134 [164/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:41.134 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:41.134 [166/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:41.134 [167/268] Linking target lib/librte_kvargs.so.24.1 00:01:41.134 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:41.134 [169/268] Linking target lib/librte_telemetry.so.24.1 00:01:41.134 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:41.134 [171/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:41.134 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:41.134 [173/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:41.134 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:41.134 [175/268] Linking static target lib/librte_power.a 00:01:41.134 [176/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:41.134 [177/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:41.135 [178/268] Linking static target lib/librte_reorder.a 00:01:41.135 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:41.135 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:41.135 [181/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:41.135 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:41.135 [183/268] Linking static target lib/librte_compressdev.a 00:01:41.135 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:41.135 [185/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:41.135 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:41.135 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:41.135 [188/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:41.135 [189/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:41.135 [190/268] Linking static target lib/librte_hash.a 00:01:41.394 [191/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:41.394 [192/268] 
Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:41.394 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:41.394 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:41.394 [195/268] Linking static target lib/librte_security.a 00:01:41.394 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.394 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:41.394 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.394 [199/268] Linking static target drivers/librte_bus_vdev.a 00:01:41.394 [200/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:41.394 [201/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:41.394 [202/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.394 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:41.394 [204/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:41.394 [205/268] Linking static target lib/librte_cryptodev.a 00:01:41.394 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:41.394 [207/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.394 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.394 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.394 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:41.653 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:41.653 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.653 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.653 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:41.653 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.653 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.653 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.653 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.653 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.653 [220/268] Linking static target lib/librte_ethdev.a 00:01:41.911 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.911 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.911 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.170 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.170 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:42.170 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.170 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.105 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:43.105 [229/268] Linking static target lib/librte_vhost.a 00:01:43.364 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.264 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.532 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.791 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.791 [234/268] Linking target lib/librte_eal.so.24.1 00:01:51.050 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:51.050 [236/268] Linking target lib/librte_ring.so.24.1 00:01:51.050 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:51.050 [238/268] Linking target lib/librte_pci.so.24.1 00:01:51.050 [239/268] Linking target lib/librte_meter.so.24.1 00:01:51.050 [240/268] Linking target lib/librte_timer.so.24.1 00:01:51.050 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:51.050 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:51.050 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:51.050 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:51.050 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:51.050 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:51.050 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:51.050 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:51.050 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:51.308 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:51.308 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:51.308 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:51.308 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:51.566 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:51.566 [255/268] Linking target lib/librte_net.so.24.1 00:01:51.566 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:51.566 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:51.566 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:51.566 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:51.566 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:51.566 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:51.566 [262/268] Linking target lib/librte_hash.so.24.1 00:01:51.832 [263/268] Linking target lib/librte_security.so.24.1 00:01:51.832 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:51.832 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:51.832 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:51.832 [267/268] Linking target lib/librte_power.so.24.1 00:01:51.832 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:51.832 INFO: autodetecting backend as ninja 00:01:51.832 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:04.039 CC lib/log/log.o 00:02:04.039 CC 
lib/log/log_flags.o 00:02:04.039 CC lib/log/log_deprecated.o 00:02:04.039 CC lib/ut/ut.o 00:02:04.039 CC lib/ut_mock/mock.o 00:02:04.039 LIB libspdk_log.a 00:02:04.039 LIB libspdk_ut.a 00:02:04.039 LIB libspdk_ut_mock.a 00:02:04.039 SO libspdk_log.so.7.1 00:02:04.039 SO libspdk_ut.so.2.0 00:02:04.039 SO libspdk_ut_mock.so.6.0 00:02:04.039 SYMLINK libspdk_log.so 00:02:04.039 SYMLINK libspdk_ut_mock.so 00:02:04.039 SYMLINK libspdk_ut.so 00:02:04.039 CC lib/util/base64.o 00:02:04.039 CC lib/util/bit_array.o 00:02:04.039 CC lib/util/cpuset.o 00:02:04.039 CC lib/util/crc32.o 00:02:04.039 CC lib/util/crc16.o 00:02:04.039 CC lib/util/crc32c.o 00:02:04.039 CC lib/util/crc64.o 00:02:04.039 CC lib/util/crc32_ieee.o 00:02:04.039 CC lib/util/dif.o 00:02:04.039 CC lib/util/fd.o 00:02:04.039 CC lib/util/fd_group.o 00:02:04.039 CC lib/util/file.o 00:02:04.039 CC lib/dma/dma.o 00:02:04.039 CC lib/util/hexlify.o 00:02:04.039 CC lib/util/iov.o 00:02:04.039 CC lib/ioat/ioat.o 00:02:04.039 CC lib/util/math.o 00:02:04.039 CXX lib/trace_parser/trace.o 00:02:04.039 CC lib/util/net.o 00:02:04.039 CC lib/util/pipe.o 00:02:04.039 CC lib/util/strerror_tls.o 00:02:04.039 CC lib/util/string.o 00:02:04.039 CC lib/util/uuid.o 00:02:04.039 CC lib/util/xor.o 00:02:04.039 CC lib/util/zipf.o 00:02:04.039 CC lib/util/md5.o 00:02:04.039 CC lib/vfio_user/host/vfio_user_pci.o 00:02:04.039 CC lib/vfio_user/host/vfio_user.o 00:02:04.039 LIB libspdk_dma.a 00:02:04.039 SO libspdk_dma.so.5.0 00:02:04.039 SYMLINK libspdk_dma.so 00:02:04.039 LIB libspdk_ioat.a 00:02:04.039 SO libspdk_ioat.so.7.0 00:02:04.039 LIB libspdk_vfio_user.a 00:02:04.039 SYMLINK libspdk_ioat.so 00:02:04.039 SO libspdk_vfio_user.so.5.0 00:02:04.039 SYMLINK libspdk_vfio_user.so 00:02:04.039 LIB libspdk_util.a 00:02:04.039 SO libspdk_util.so.10.1 00:02:04.039 SYMLINK libspdk_util.so 00:02:04.040 LIB libspdk_trace_parser.a 00:02:04.040 SO libspdk_trace_parser.so.6.0 00:02:04.040 SYMLINK libspdk_trace_parser.so 00:02:04.040 CC lib/conf/conf.o 00:02:04.040 CC lib/json/json_parse.o 00:02:04.040 CC lib/json/json_util.o 00:02:04.040 CC lib/rdma_utils/rdma_utils.o 00:02:04.040 CC lib/json/json_write.o 00:02:04.040 CC lib/env_dpdk/env.o 00:02:04.040 CC lib/env_dpdk/memory.o 00:02:04.040 CC lib/idxd/idxd.o 00:02:04.040 CC lib/env_dpdk/pci.o 00:02:04.040 CC lib/idxd/idxd_user.o 00:02:04.040 CC lib/vmd/vmd.o 00:02:04.040 CC lib/env_dpdk/init.o 00:02:04.040 CC lib/env_dpdk/threads.o 00:02:04.040 CC lib/idxd/idxd_kernel.o 00:02:04.040 CC lib/vmd/led.o 00:02:04.040 CC lib/env_dpdk/pci_ioat.o 00:02:04.040 CC lib/env_dpdk/pci_virtio.o 00:02:04.040 CC lib/env_dpdk/pci_vmd.o 00:02:04.040 CC lib/env_dpdk/pci_idxd.o 00:02:04.040 CC lib/env_dpdk/pci_event.o 00:02:04.040 CC lib/env_dpdk/sigbus_handler.o 00:02:04.040 CC lib/env_dpdk/pci_dpdk.o 00:02:04.040 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:04.040 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:04.298 LIB libspdk_conf.a 00:02:04.298 SO libspdk_conf.so.6.0 00:02:04.298 LIB libspdk_rdma_utils.a 00:02:04.298 LIB libspdk_json.a 00:02:04.558 SO libspdk_rdma_utils.so.1.0 00:02:04.558 SO libspdk_json.so.6.0 00:02:04.558 SYMLINK libspdk_conf.so 00:02:04.558 SYMLINK libspdk_rdma_utils.so 00:02:04.558 SYMLINK libspdk_json.so 00:02:04.558 LIB libspdk_idxd.a 00:02:04.558 SO libspdk_idxd.so.12.1 00:02:04.558 LIB libspdk_vmd.a 00:02:04.818 SO libspdk_vmd.so.6.0 00:02:04.818 SYMLINK libspdk_idxd.so 00:02:04.818 SYMLINK libspdk_vmd.so 00:02:04.818 CC lib/rdma_provider/common.o 00:02:04.818 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:04.818 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:04.818 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:04.818 CC lib/jsonrpc/jsonrpc_client.o 00:02:04.818 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.078 LIB libspdk_rdma_provider.a 00:02:05.078 SO libspdk_rdma_provider.so.7.0 00:02:05.078 LIB libspdk_jsonrpc.a 00:02:05.078 SO libspdk_jsonrpc.so.6.0 00:02:05.078 SYMLINK libspdk_rdma_provider.so 00:02:05.078 SYMLINK libspdk_jsonrpc.so 00:02:05.078 LIB libspdk_env_dpdk.a 00:02:05.338 SO libspdk_env_dpdk.so.15.1 00:02:05.338 SYMLINK libspdk_env_dpdk.so 00:02:05.338 CC lib/rpc/rpc.o 00:02:05.598 LIB libspdk_rpc.a 00:02:05.598 SO libspdk_rpc.so.6.0 00:02:05.598 SYMLINK libspdk_rpc.so 00:02:06.166 CC lib/notify/notify.o 00:02:06.166 CC lib/notify/notify_rpc.o 00:02:06.166 CC lib/keyring/keyring.o 00:02:06.166 CC lib/keyring/keyring_rpc.o 00:02:06.166 CC lib/trace/trace.o 00:02:06.166 CC lib/trace/trace_flags.o 00:02:06.166 CC lib/trace/trace_rpc.o 00:02:06.166 LIB libspdk_notify.a 00:02:06.166 LIB libspdk_keyring.a 00:02:06.166 SO libspdk_notify.so.6.0 00:02:06.166 LIB libspdk_trace.a 00:02:06.166 SO libspdk_keyring.so.2.0 00:02:06.427 SYMLINK libspdk_notify.so 00:02:06.427 SO libspdk_trace.so.11.0 00:02:06.427 SYMLINK libspdk_keyring.so 00:02:06.427 SYMLINK libspdk_trace.so 00:02:06.687 CC lib/sock/sock.o 00:02:06.687 CC lib/sock/sock_rpc.o 00:02:06.687 CC lib/thread/thread.o 00:02:06.687 CC lib/thread/iobuf.o 00:02:06.945 LIB libspdk_sock.a 00:02:06.945 SO libspdk_sock.so.10.0 00:02:07.204 SYMLINK libspdk_sock.so 00:02:07.463 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.463 CC lib/nvme/nvme_ctrlr.o 00:02:07.463 CC lib/nvme/nvme_fabric.o 00:02:07.463 CC lib/nvme/nvme_ns_cmd.o 00:02:07.463 CC lib/nvme/nvme_ns.o 00:02:07.463 CC lib/nvme/nvme_pcie_common.o 00:02:07.463 CC lib/nvme/nvme_pcie.o 00:02:07.463 CC lib/nvme/nvme_qpair.o 00:02:07.463 CC lib/nvme/nvme.o 00:02:07.463 CC lib/nvme/nvme_quirks.o 00:02:07.463 CC lib/nvme/nvme_transport.o 00:02:07.463 CC lib/nvme/nvme_discovery.o 00:02:07.463 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:07.463 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:07.463 CC lib/nvme/nvme_tcp.o 00:02:07.463 CC lib/nvme/nvme_opal.o 00:02:07.463 CC lib/nvme/nvme_io_msg.o 00:02:07.463 CC lib/nvme/nvme_poll_group.o 00:02:07.463 CC lib/nvme/nvme_zns.o 00:02:07.463 CC lib/nvme/nvme_stubs.o 00:02:07.463 CC lib/nvme/nvme_auth.o 00:02:07.463 CC lib/nvme/nvme_cuse.o 00:02:07.463 CC lib/nvme/nvme_vfio_user.o 00:02:07.463 CC lib/nvme/nvme_rdma.o 00:02:07.723 LIB libspdk_thread.a 00:02:07.723 SO libspdk_thread.so.11.0 00:02:07.723 SYMLINK libspdk_thread.so 00:02:07.981 CC lib/accel/accel.o 00:02:07.981 CC lib/accel/accel_rpc.o 00:02:07.981 CC lib/accel/accel_sw.o 00:02:08.239 CC lib/virtio/virtio.o 00:02:08.240 CC lib/virtio/virtio_vhost_user.o 00:02:08.240 CC lib/virtio/virtio_pci.o 00:02:08.240 CC lib/virtio/virtio_vfio_user.o 00:02:08.240 CC lib/blob/blobstore.o 00:02:08.240 CC lib/blob/zeroes.o 00:02:08.240 CC lib/blob/request.o 00:02:08.240 CC lib/blob/blob_bs_dev.o 00:02:08.240 CC lib/init/json_config.o 00:02:08.240 CC lib/vfu_tgt/tgt_endpoint.o 00:02:08.240 CC lib/init/subsystem.o 00:02:08.240 CC lib/init/subsystem_rpc.o 00:02:08.240 CC lib/vfu_tgt/tgt_rpc.o 00:02:08.240 CC lib/init/rpc.o 00:02:08.240 CC lib/fsdev/fsdev.o 00:02:08.240 CC lib/fsdev/fsdev_rpc.o 00:02:08.240 CC lib/fsdev/fsdev_io.o 00:02:08.240 LIB libspdk_init.a 00:02:08.498 SO libspdk_init.so.6.0 00:02:08.498 LIB libspdk_virtio.a 00:02:08.498 LIB libspdk_vfu_tgt.a 00:02:08.498 SYMLINK libspdk_init.so 00:02:08.498 SO libspdk_virtio.so.7.0 00:02:08.498 
SO libspdk_vfu_tgt.so.3.0 00:02:08.498 SYMLINK libspdk_virtio.so 00:02:08.498 SYMLINK libspdk_vfu_tgt.so 00:02:08.756 LIB libspdk_fsdev.a 00:02:08.756 SO libspdk_fsdev.so.2.0 00:02:08.756 CC lib/event/app.o 00:02:08.756 CC lib/event/reactor.o 00:02:08.756 CC lib/event/log_rpc.o 00:02:08.756 CC lib/event/app_rpc.o 00:02:08.756 CC lib/event/scheduler_static.o 00:02:08.756 SYMLINK libspdk_fsdev.so 00:02:09.014 LIB libspdk_accel.a 00:02:09.014 SO libspdk_accel.so.16.0 00:02:09.014 SYMLINK libspdk_accel.so 00:02:09.014 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:09.014 LIB libspdk_event.a 00:02:09.014 LIB libspdk_nvme.a 00:02:09.014 SO libspdk_event.so.14.0 00:02:09.273 SO libspdk_nvme.so.15.0 00:02:09.273 SYMLINK libspdk_event.so 00:02:09.273 CC lib/bdev/bdev.o 00:02:09.273 CC lib/bdev/bdev_rpc.o 00:02:09.273 CC lib/bdev/bdev_zone.o 00:02:09.273 CC lib/bdev/part.o 00:02:09.273 CC lib/bdev/scsi_nvme.o 00:02:09.273 SYMLINK libspdk_nvme.so 00:02:09.532 LIB libspdk_fuse_dispatcher.a 00:02:09.532 SO libspdk_fuse_dispatcher.so.1.0 00:02:09.791 SYMLINK libspdk_fuse_dispatcher.so 00:02:10.359 LIB libspdk_blob.a 00:02:10.359 SO libspdk_blob.so.11.0 00:02:10.359 SYMLINK libspdk_blob.so 00:02:10.618 CC lib/blobfs/blobfs.o 00:02:10.618 CC lib/blobfs/tree.o 00:02:10.618 CC lib/lvol/lvol.o 00:02:11.185 LIB libspdk_bdev.a 00:02:11.185 LIB libspdk_blobfs.a 00:02:11.185 SO libspdk_bdev.so.17.0 00:02:11.185 SO libspdk_blobfs.so.10.0 00:02:11.445 SYMLINK libspdk_bdev.so 00:02:11.445 SYMLINK libspdk_blobfs.so 00:02:11.445 LIB libspdk_lvol.a 00:02:11.445 SO libspdk_lvol.so.10.0 00:02:11.445 SYMLINK libspdk_lvol.so 00:02:11.704 CC lib/nvmf/ctrlr.o 00:02:11.704 CC lib/nbd/nbd.o 00:02:11.704 CC lib/nvmf/ctrlr_bdev.o 00:02:11.704 CC lib/ftl/ftl_core.o 00:02:11.704 CC lib/nvmf/ctrlr_discovery.o 00:02:11.704 CC lib/nbd/nbd_rpc.o 00:02:11.704 CC lib/ftl/ftl_init.o 00:02:11.704 CC lib/nvmf/subsystem.o 00:02:11.704 CC lib/ftl/ftl_layout.o 00:02:11.704 CC lib/nvmf/nvmf.o 00:02:11.704 CC lib/ftl/ftl_debug.o 00:02:11.704 CC lib/nvmf/nvmf_rpc.o 00:02:11.704 CC lib/nvmf/transport.o 00:02:11.704 CC lib/ftl/ftl_io.o 00:02:11.704 CC lib/ftl/ftl_sb.o 00:02:11.704 CC lib/nvmf/tcp.o 00:02:11.704 CC lib/ftl/ftl_l2p.o 00:02:11.704 CC lib/nvmf/stubs.o 00:02:11.704 CC lib/ftl/ftl_l2p_flat.o 00:02:11.704 CC lib/nvmf/mdns_server.o 00:02:11.704 CC lib/nvmf/vfio_user.o 00:02:11.704 CC lib/ftl/ftl_nv_cache.o 00:02:11.704 CC lib/nvmf/rdma.o 00:02:11.704 CC lib/ftl/ftl_band.o 00:02:11.704 CC lib/scsi/dev.o 00:02:11.704 CC lib/ublk/ublk.o 00:02:11.704 CC lib/nvmf/auth.o 00:02:11.704 CC lib/ublk/ublk_rpc.o 00:02:11.704 CC lib/scsi/port.o 00:02:11.704 CC lib/scsi/lun.o 00:02:11.704 CC lib/ftl/ftl_band_ops.o 00:02:11.704 CC lib/ftl/ftl_writer.o 00:02:11.704 CC lib/scsi/scsi.o 00:02:11.704 CC lib/ftl/ftl_rq.o 00:02:11.704 CC lib/scsi/scsi_bdev.o 00:02:11.704 CC lib/ftl/ftl_reloc.o 00:02:11.704 CC lib/ftl/ftl_p2l.o 00:02:11.704 CC lib/ftl/ftl_l2p_cache.o 00:02:11.704 CC lib/scsi/scsi_pr.o 00:02:11.705 CC lib/scsi/scsi_rpc.o 00:02:11.705 CC lib/scsi/task.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt.o 00:02:11.705 CC lib/ftl/ftl_p2l_log.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt_self_test.o 
00:02:11.705 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:11.705 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:11.705 CC lib/ftl/utils/ftl_conf.o 00:02:11.705 CC lib/ftl/utils/ftl_md.o 00:02:11.705 CC lib/ftl/utils/ftl_mempool.o 00:02:11.705 CC lib/ftl/utils/ftl_bitmap.o 00:02:11.705 CC lib/ftl/utils/ftl_property.o 00:02:11.705 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:11.705 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:11.705 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:11.705 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:11.705 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:11.705 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:11.705 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:11.705 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:11.705 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:11.705 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:11.705 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:11.705 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:11.705 CC lib/ftl/base/ftl_base_bdev.o 00:02:11.705 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:11.705 CC lib/ftl/base/ftl_base_dev.o 00:02:11.705 CC lib/ftl/ftl_trace.o 00:02:12.272 LIB libspdk_nbd.a 00:02:12.272 SO libspdk_nbd.so.7.0 00:02:12.272 LIB libspdk_ublk.a 00:02:12.272 SYMLINK libspdk_nbd.so 00:02:12.272 SO libspdk_ublk.so.3.0 00:02:12.530 LIB libspdk_scsi.a 00:02:12.530 SYMLINK libspdk_ublk.so 00:02:12.530 SO libspdk_scsi.so.9.0 00:02:12.530 SYMLINK libspdk_scsi.so 00:02:12.789 LIB libspdk_ftl.a 00:02:12.789 SO libspdk_ftl.so.9.0 00:02:12.789 CC lib/vhost/vhost.o 00:02:12.789 CC lib/vhost/vhost_rpc.o 00:02:12.789 CC lib/vhost/vhost_scsi.o 00:02:12.789 CC lib/vhost/vhost_blk.o 00:02:12.789 CC lib/vhost/rte_vhost_user.o 00:02:12.789 CC lib/iscsi/iscsi.o 00:02:12.789 CC lib/iscsi/conn.o 00:02:12.789 CC lib/iscsi/init_grp.o 00:02:12.789 CC lib/iscsi/param.o 00:02:12.789 CC lib/iscsi/portal_grp.o 00:02:12.789 CC lib/iscsi/tgt_node.o 00:02:12.789 CC lib/iscsi/iscsi_subsystem.o 00:02:12.789 CC lib/iscsi/iscsi_rpc.o 00:02:12.789 CC lib/iscsi/task.o 00:02:13.047 SYMLINK libspdk_ftl.so 00:02:13.615 LIB libspdk_nvmf.a 00:02:13.615 SO libspdk_nvmf.so.20.0 00:02:13.615 LIB libspdk_vhost.a 00:02:13.615 SO libspdk_vhost.so.8.0 00:02:13.873 SYMLINK libspdk_nvmf.so 00:02:13.873 SYMLINK libspdk_vhost.so 00:02:13.873 LIB libspdk_iscsi.a 00:02:13.873 SO libspdk_iscsi.so.8.0 00:02:14.132 SYMLINK libspdk_iscsi.so 00:02:14.701 CC module/env_dpdk/env_dpdk_rpc.o 00:02:14.701 CC module/vfu_device/vfu_virtio.o 00:02:14.701 CC module/vfu_device/vfu_virtio_blk.o 00:02:14.701 CC module/vfu_device/vfu_virtio_scsi.o 00:02:14.701 CC module/vfu_device/vfu_virtio_rpc.o 00:02:14.701 CC module/vfu_device/vfu_virtio_fs.o 00:02:14.701 CC module/keyring/linux/keyring.o 00:02:14.701 CC module/scheduler/gscheduler/gscheduler.o 00:02:14.701 CC module/blob/bdev/blob_bdev.o 00:02:14.701 CC module/keyring/linux/keyring_rpc.o 00:02:14.701 CC module/accel/dsa/accel_dsa.o 00:02:14.701 CC module/accel/error/accel_error.o 00:02:14.701 CC module/accel/error/accel_error_rpc.o 00:02:14.701 CC module/accel/dsa/accel_dsa_rpc.o 00:02:14.701 CC module/sock/posix/posix.o 00:02:14.701 LIB libspdk_env_dpdk_rpc.a 00:02:14.701 CC module/keyring/file/keyring.o 00:02:14.701 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:14.701 CC module/keyring/file/keyring_rpc.o 00:02:14.701 CC module/fsdev/aio/fsdev_aio.o 00:02:14.701 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:14.701 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:14.701 CC module/fsdev/aio/linux_aio_mgr.o 00:02:14.701 CC 
module/accel/iaa/accel_iaa.o 00:02:14.701 CC module/accel/iaa/accel_iaa_rpc.o 00:02:14.701 CC module/accel/ioat/accel_ioat.o 00:02:14.701 CC module/accel/ioat/accel_ioat_rpc.o 00:02:14.701 SO libspdk_env_dpdk_rpc.so.6.0 00:02:14.961 SYMLINK libspdk_env_dpdk_rpc.so 00:02:14.961 LIB libspdk_keyring_linux.a 00:02:14.961 LIB libspdk_scheduler_gscheduler.a 00:02:14.961 LIB libspdk_keyring_file.a 00:02:14.961 LIB libspdk_scheduler_dpdk_governor.a 00:02:14.961 SO libspdk_scheduler_gscheduler.so.4.0 00:02:14.961 SO libspdk_keyring_linux.so.1.0 00:02:14.961 LIB libspdk_accel_ioat.a 00:02:14.961 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:14.961 LIB libspdk_accel_error.a 00:02:14.961 SO libspdk_keyring_file.so.2.0 00:02:14.961 LIB libspdk_scheduler_dynamic.a 00:02:14.961 SO libspdk_accel_ioat.so.6.0 00:02:14.961 LIB libspdk_accel_iaa.a 00:02:14.961 SO libspdk_scheduler_dynamic.so.4.0 00:02:14.961 SO libspdk_accel_error.so.2.0 00:02:14.961 SYMLINK libspdk_scheduler_gscheduler.so 00:02:14.961 SYMLINK libspdk_keyring_linux.so 00:02:14.961 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:14.961 LIB libspdk_accel_dsa.a 00:02:14.961 LIB libspdk_blob_bdev.a 00:02:14.961 SYMLINK libspdk_keyring_file.so 00:02:14.961 SO libspdk_accel_iaa.so.3.0 00:02:14.961 SYMLINK libspdk_scheduler_dynamic.so 00:02:14.961 SO libspdk_accel_dsa.so.5.0 00:02:14.961 SYMLINK libspdk_accel_ioat.so 00:02:14.961 SYMLINK libspdk_accel_error.so 00:02:14.961 SO libspdk_blob_bdev.so.11.0 00:02:14.961 SYMLINK libspdk_accel_iaa.so 00:02:14.961 SYMLINK libspdk_accel_dsa.so 00:02:14.961 SYMLINK libspdk_blob_bdev.so 00:02:15.219 LIB libspdk_vfu_device.a 00:02:15.219 SO libspdk_vfu_device.so.3.0 00:02:15.219 SYMLINK libspdk_vfu_device.so 00:02:15.219 LIB libspdk_fsdev_aio.a 00:02:15.219 SO libspdk_fsdev_aio.so.1.0 00:02:15.219 LIB libspdk_sock_posix.a 00:02:15.477 SO libspdk_sock_posix.so.6.0 00:02:15.477 SYMLINK libspdk_fsdev_aio.so 00:02:15.477 SYMLINK libspdk_sock_posix.so 00:02:15.477 CC module/bdev/delay/vbdev_delay.o 00:02:15.477 CC module/bdev/gpt/gpt.o 00:02:15.477 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:15.477 CC module/bdev/gpt/vbdev_gpt.o 00:02:15.477 CC module/bdev/error/vbdev_error.o 00:02:15.477 CC module/bdev/error/vbdev_error_rpc.o 00:02:15.477 CC module/bdev/aio/bdev_aio.o 00:02:15.477 CC module/bdev/aio/bdev_aio_rpc.o 00:02:15.477 CC module/blobfs/bdev/blobfs_bdev.o 00:02:15.477 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:15.477 CC module/bdev/malloc/bdev_malloc.o 00:02:15.477 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:15.477 CC module/bdev/lvol/vbdev_lvol.o 00:02:15.477 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:15.477 CC module/bdev/nvme/bdev_nvme.o 00:02:15.477 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:15.477 CC module/bdev/iscsi/bdev_iscsi.o 00:02:15.477 CC module/bdev/nvme/nvme_rpc.o 00:02:15.477 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:15.477 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:15.477 CC module/bdev/passthru/vbdev_passthru.o 00:02:15.477 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:15.477 CC module/bdev/nvme/bdev_mdns_client.o 00:02:15.477 CC module/bdev/nvme/vbdev_opal.o 00:02:15.477 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:15.477 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:15.477 CC module/bdev/ftl/bdev_ftl.o 00:02:15.477 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:15.477 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:15.477 CC module/bdev/null/bdev_null.o 00:02:15.477 CC module/bdev/null/bdev_null_rpc.o 00:02:15.477 CC module/bdev/split/vbdev_split_rpc.o 00:02:15.477 CC 
module/bdev/split/vbdev_split.o 00:02:15.477 CC module/bdev/raid/bdev_raid.o 00:02:15.477 CC module/bdev/raid/bdev_raid_rpc.o 00:02:15.477 CC module/bdev/raid/raid0.o 00:02:15.477 CC module/bdev/raid/bdev_raid_sb.o 00:02:15.477 CC module/bdev/raid/concat.o 00:02:15.477 CC module/bdev/raid/raid1.o 00:02:15.477 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:15.477 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:15.477 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:15.735 LIB libspdk_blobfs_bdev.a 00:02:15.735 SO libspdk_blobfs_bdev.so.6.0 00:02:15.994 LIB libspdk_bdev_split.a 00:02:15.994 LIB libspdk_bdev_gpt.a 00:02:15.994 LIB libspdk_bdev_error.a 00:02:15.994 SO libspdk_bdev_split.so.6.0 00:02:15.994 SO libspdk_bdev_gpt.so.6.0 00:02:15.994 LIB libspdk_bdev_null.a 00:02:15.994 SO libspdk_bdev_error.so.6.0 00:02:15.995 LIB libspdk_bdev_ftl.a 00:02:15.995 LIB libspdk_bdev_passthru.a 00:02:15.995 SYMLINK libspdk_blobfs_bdev.so 00:02:15.995 LIB libspdk_bdev_aio.a 00:02:15.995 SO libspdk_bdev_null.so.6.0 00:02:15.995 SO libspdk_bdev_ftl.so.6.0 00:02:15.995 LIB libspdk_bdev_delay.a 00:02:15.995 LIB libspdk_bdev_zone_block.a 00:02:15.995 SO libspdk_bdev_passthru.so.6.0 00:02:15.995 SYMLINK libspdk_bdev_gpt.so 00:02:15.995 SO libspdk_bdev_aio.so.6.0 00:02:15.995 SYMLINK libspdk_bdev_split.so 00:02:15.995 SYMLINK libspdk_bdev_error.so 00:02:15.995 SO libspdk_bdev_delay.so.6.0 00:02:15.995 SO libspdk_bdev_zone_block.so.6.0 00:02:15.995 LIB libspdk_bdev_malloc.a 00:02:15.995 SYMLINK libspdk_bdev_null.so 00:02:15.995 SYMLINK libspdk_bdev_ftl.so 00:02:15.995 LIB libspdk_bdev_iscsi.a 00:02:15.995 SYMLINK libspdk_bdev_passthru.so 00:02:15.995 SYMLINK libspdk_bdev_aio.so 00:02:15.995 SO libspdk_bdev_malloc.so.6.0 00:02:15.995 SYMLINK libspdk_bdev_zone_block.so 00:02:15.995 SYMLINK libspdk_bdev_delay.so 00:02:15.995 SO libspdk_bdev_iscsi.so.6.0 00:02:15.995 SYMLINK libspdk_bdev_malloc.so 00:02:15.995 SYMLINK libspdk_bdev_iscsi.so 00:02:15.995 LIB libspdk_bdev_virtio.a 00:02:15.995 LIB libspdk_bdev_lvol.a 00:02:16.253 SO libspdk_bdev_virtio.so.6.0 00:02:16.253 SO libspdk_bdev_lvol.so.6.0 00:02:16.253 SYMLINK libspdk_bdev_lvol.so 00:02:16.253 SYMLINK libspdk_bdev_virtio.so 00:02:16.512 LIB libspdk_bdev_raid.a 00:02:16.512 SO libspdk_bdev_raid.so.6.0 00:02:16.512 SYMLINK libspdk_bdev_raid.so 00:02:17.449 LIB libspdk_bdev_nvme.a 00:02:17.449 SO libspdk_bdev_nvme.so.7.1 00:02:17.709 SYMLINK libspdk_bdev_nvme.so 00:02:18.276 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:18.277 CC module/event/subsystems/vmd/vmd.o 00:02:18.277 CC module/event/subsystems/sock/sock.o 00:02:18.277 CC module/event/subsystems/iobuf/iobuf.o 00:02:18.277 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:18.277 CC module/event/subsystems/keyring/keyring.o 00:02:18.277 CC module/event/subsystems/scheduler/scheduler.o 00:02:18.277 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:18.277 CC module/event/subsystems/fsdev/fsdev.o 00:02:18.277 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:18.277 LIB libspdk_event_vhost_blk.a 00:02:18.277 LIB libspdk_event_fsdev.a 00:02:18.277 LIB libspdk_event_scheduler.a 00:02:18.536 LIB libspdk_event_keyring.a 00:02:18.536 LIB libspdk_event_iobuf.a 00:02:18.536 LIB libspdk_event_vmd.a 00:02:18.536 LIB libspdk_event_sock.a 00:02:18.536 LIB libspdk_event_vfu_tgt.a 00:02:18.536 SO libspdk_event_vhost_blk.so.3.0 00:02:18.536 SO libspdk_event_fsdev.so.1.0 00:02:18.536 SO libspdk_event_scheduler.so.4.0 00:02:18.536 SO libspdk_event_keyring.so.1.0 00:02:18.536 SO libspdk_event_iobuf.so.3.0 00:02:18.536 SO 
libspdk_event_vmd.so.6.0 00:02:18.536 SO libspdk_event_sock.so.5.0 00:02:18.536 SO libspdk_event_vfu_tgt.so.3.0 00:02:18.536 SYMLINK libspdk_event_fsdev.so 00:02:18.536 SYMLINK libspdk_event_vhost_blk.so 00:02:18.536 SYMLINK libspdk_event_scheduler.so 00:02:18.536 SYMLINK libspdk_event_keyring.so 00:02:18.536 SYMLINK libspdk_event_vmd.so 00:02:18.536 SYMLINK libspdk_event_sock.so 00:02:18.536 SYMLINK libspdk_event_iobuf.so 00:02:18.536 SYMLINK libspdk_event_vfu_tgt.so 00:02:18.795 CC module/event/subsystems/accel/accel.o 00:02:19.055 LIB libspdk_event_accel.a 00:02:19.055 SO libspdk_event_accel.so.6.0 00:02:19.055 SYMLINK libspdk_event_accel.so 00:02:19.315 CC module/event/subsystems/bdev/bdev.o 00:02:19.574 LIB libspdk_event_bdev.a 00:02:19.574 SO libspdk_event_bdev.so.6.0 00:02:19.574 SYMLINK libspdk_event_bdev.so 00:02:19.834 CC module/event/subsystems/ublk/ublk.o 00:02:19.834 CC module/event/subsystems/scsi/scsi.o 00:02:19.834 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:19.834 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:19.834 CC module/event/subsystems/nbd/nbd.o 00:02:20.093 LIB libspdk_event_nbd.a 00:02:20.093 LIB libspdk_event_ublk.a 00:02:20.093 SO libspdk_event_nbd.so.6.0 00:02:20.093 LIB libspdk_event_scsi.a 00:02:20.093 SO libspdk_event_ublk.so.3.0 00:02:20.093 SO libspdk_event_scsi.so.6.0 00:02:20.093 LIB libspdk_event_nvmf.a 00:02:20.093 SYMLINK libspdk_event_nbd.so 00:02:20.093 SYMLINK libspdk_event_ublk.so 00:02:20.093 SO libspdk_event_nvmf.so.6.0 00:02:20.093 SYMLINK libspdk_event_scsi.so 00:02:20.093 SYMLINK libspdk_event_nvmf.so 00:02:20.353 CC module/event/subsystems/iscsi/iscsi.o 00:02:20.613 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:20.613 LIB libspdk_event_vhost_scsi.a 00:02:20.613 LIB libspdk_event_iscsi.a 00:02:20.613 SO libspdk_event_vhost_scsi.so.3.0 00:02:20.613 SO libspdk_event_iscsi.so.6.0 00:02:20.613 SYMLINK libspdk_event_vhost_scsi.so 00:02:20.873 SYMLINK libspdk_event_iscsi.so 00:02:20.873 SO libspdk.so.6.0 00:02:20.873 SYMLINK libspdk.so 00:02:21.132 CXX app/trace/trace.o 00:02:21.425 CC app/trace_record/trace_record.o 00:02:21.425 CC app/spdk_nvme_discover/discovery_aer.o 00:02:21.425 CC app/spdk_top/spdk_top.o 00:02:21.425 CC app/spdk_lspci/spdk_lspci.o 00:02:21.425 TEST_HEADER include/spdk/accel.h 00:02:21.425 TEST_HEADER include/spdk/assert.h 00:02:21.425 TEST_HEADER include/spdk/accel_module.h 00:02:21.425 TEST_HEADER include/spdk/barrier.h 00:02:21.425 CC app/spdk_nvme_identify/identify.o 00:02:21.425 TEST_HEADER include/spdk/bdev_zone.h 00:02:21.425 TEST_HEADER include/spdk/base64.h 00:02:21.425 TEST_HEADER include/spdk/bdev.h 00:02:21.425 TEST_HEADER include/spdk/bdev_module.h 00:02:21.425 TEST_HEADER include/spdk/bit_array.h 00:02:21.425 TEST_HEADER include/spdk/blob_bdev.h 00:02:21.425 CC test/rpc_client/rpc_client_test.o 00:02:21.425 TEST_HEADER include/spdk/bit_pool.h 00:02:21.425 CC app/spdk_nvme_perf/perf.o 00:02:21.425 TEST_HEADER include/spdk/blobfs.h 00:02:21.425 TEST_HEADER include/spdk/blob.h 00:02:21.425 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:21.425 TEST_HEADER include/spdk/conf.h 00:02:21.425 TEST_HEADER include/spdk/config.h 00:02:21.425 TEST_HEADER include/spdk/cpuset.h 00:02:21.425 TEST_HEADER include/spdk/crc16.h 00:02:21.425 TEST_HEADER include/spdk/crc64.h 00:02:21.425 TEST_HEADER include/spdk/crc32.h 00:02:21.425 TEST_HEADER include/spdk/dif.h 00:02:21.425 TEST_HEADER include/spdk/dma.h 00:02:21.425 TEST_HEADER include/spdk/endian.h 00:02:21.425 TEST_HEADER include/spdk/env_dpdk.h 00:02:21.425 
TEST_HEADER include/spdk/event.h 00:02:21.425 TEST_HEADER include/spdk/fd_group.h 00:02:21.425 TEST_HEADER include/spdk/env.h 00:02:21.425 TEST_HEADER include/spdk/file.h 00:02:21.425 TEST_HEADER include/spdk/fd.h 00:02:21.425 TEST_HEADER include/spdk/fsdev.h 00:02:21.425 TEST_HEADER include/spdk/fsdev_module.h 00:02:21.425 TEST_HEADER include/spdk/ftl.h 00:02:21.425 TEST_HEADER include/spdk/gpt_spec.h 00:02:21.425 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:21.425 TEST_HEADER include/spdk/histogram_data.h 00:02:21.425 TEST_HEADER include/spdk/idxd.h 00:02:21.425 TEST_HEADER include/spdk/idxd_spec.h 00:02:21.425 TEST_HEADER include/spdk/hexlify.h 00:02:21.425 TEST_HEADER include/spdk/init.h 00:02:21.425 TEST_HEADER include/spdk/ioat.h 00:02:21.425 TEST_HEADER include/spdk/ioat_spec.h 00:02:21.425 TEST_HEADER include/spdk/iscsi_spec.h 00:02:21.425 TEST_HEADER include/spdk/json.h 00:02:21.425 TEST_HEADER include/spdk/jsonrpc.h 00:02:21.425 TEST_HEADER include/spdk/keyring.h 00:02:21.425 CC app/spdk_dd/spdk_dd.o 00:02:21.425 TEST_HEADER include/spdk/keyring_module.h 00:02:21.425 TEST_HEADER include/spdk/log.h 00:02:21.425 CC app/nvmf_tgt/nvmf_main.o 00:02:21.425 TEST_HEADER include/spdk/memory.h 00:02:21.425 TEST_HEADER include/spdk/likely.h 00:02:21.425 TEST_HEADER include/spdk/md5.h 00:02:21.425 TEST_HEADER include/spdk/mmio.h 00:02:21.425 TEST_HEADER include/spdk/lvol.h 00:02:21.425 TEST_HEADER include/spdk/nbd.h 00:02:21.425 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:21.425 TEST_HEADER include/spdk/net.h 00:02:21.425 TEST_HEADER include/spdk/nvme_intel.h 00:02:21.425 TEST_HEADER include/spdk/notify.h 00:02:21.425 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:21.425 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:21.425 TEST_HEADER include/spdk/nvme.h 00:02:21.425 TEST_HEADER include/spdk/nvme_zns.h 00:02:21.425 TEST_HEADER include/spdk/nvme_spec.h 00:02:21.425 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:21.425 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:21.425 TEST_HEADER include/spdk/nvmf.h 00:02:21.425 TEST_HEADER include/spdk/nvmf_spec.h 00:02:21.425 TEST_HEADER include/spdk/nvmf_transport.h 00:02:21.425 TEST_HEADER include/spdk/opal.h 00:02:21.425 TEST_HEADER include/spdk/opal_spec.h 00:02:21.425 TEST_HEADER include/spdk/pci_ids.h 00:02:21.425 TEST_HEADER include/spdk/queue.h 00:02:21.425 TEST_HEADER include/spdk/reduce.h 00:02:21.425 TEST_HEADER include/spdk/pipe.h 00:02:21.425 CC app/iscsi_tgt/iscsi_tgt.o 00:02:21.425 TEST_HEADER include/spdk/rpc.h 00:02:21.425 TEST_HEADER include/spdk/scheduler.h 00:02:21.425 TEST_HEADER include/spdk/scsi.h 00:02:21.425 TEST_HEADER include/spdk/scsi_spec.h 00:02:21.425 TEST_HEADER include/spdk/stdinc.h 00:02:21.425 TEST_HEADER include/spdk/sock.h 00:02:21.425 TEST_HEADER include/spdk/string.h 00:02:21.425 TEST_HEADER include/spdk/trace.h 00:02:21.425 TEST_HEADER include/spdk/thread.h 00:02:21.425 TEST_HEADER include/spdk/trace_parser.h 00:02:21.425 TEST_HEADER include/spdk/tree.h 00:02:21.425 TEST_HEADER include/spdk/util.h 00:02:21.425 TEST_HEADER include/spdk/ublk.h 00:02:21.425 TEST_HEADER include/spdk/uuid.h 00:02:21.425 TEST_HEADER include/spdk/version.h 00:02:21.425 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:21.425 TEST_HEADER include/spdk/vhost.h 00:02:21.425 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:21.425 TEST_HEADER include/spdk/xor.h 00:02:21.425 TEST_HEADER include/spdk/vmd.h 00:02:21.425 TEST_HEADER include/spdk/zipf.h 00:02:21.425 CXX test/cpp_headers/accel.o 00:02:21.425 CXX test/cpp_headers/barrier.o 00:02:21.425 
CXX test/cpp_headers/bdev.o 00:02:21.425 CXX test/cpp_headers/assert.o 00:02:21.425 CXX test/cpp_headers/accel_module.o 00:02:21.425 CC app/spdk_tgt/spdk_tgt.o 00:02:21.425 CXX test/cpp_headers/bdev_module.o 00:02:21.425 CXX test/cpp_headers/base64.o 00:02:21.425 CXX test/cpp_headers/bit_array.o 00:02:21.425 CXX test/cpp_headers/bdev_zone.o 00:02:21.425 CXX test/cpp_headers/bit_pool.o 00:02:21.425 CXX test/cpp_headers/blobfs.o 00:02:21.425 CXX test/cpp_headers/blob_bdev.o 00:02:21.425 CXX test/cpp_headers/blobfs_bdev.o 00:02:21.425 CXX test/cpp_headers/blob.o 00:02:21.425 CXX test/cpp_headers/config.o 00:02:21.425 CXX test/cpp_headers/cpuset.o 00:02:21.425 CXX test/cpp_headers/conf.o 00:02:21.425 CXX test/cpp_headers/crc32.o 00:02:21.425 CXX test/cpp_headers/crc16.o 00:02:21.425 CXX test/cpp_headers/crc64.o 00:02:21.425 CXX test/cpp_headers/dma.o 00:02:21.425 CXX test/cpp_headers/dif.o 00:02:21.425 CXX test/cpp_headers/endian.o 00:02:21.425 CXX test/cpp_headers/env.o 00:02:21.425 CXX test/cpp_headers/env_dpdk.o 00:02:21.425 CXX test/cpp_headers/fd_group.o 00:02:21.425 CXX test/cpp_headers/event.o 00:02:21.425 CXX test/cpp_headers/fd.o 00:02:21.425 CXX test/cpp_headers/file.o 00:02:21.425 CXX test/cpp_headers/fsdev.o 00:02:21.425 CXX test/cpp_headers/fsdev_module.o 00:02:21.425 CXX test/cpp_headers/ftl.o 00:02:21.425 CXX test/cpp_headers/fuse_dispatcher.o 00:02:21.425 CXX test/cpp_headers/gpt_spec.o 00:02:21.425 CXX test/cpp_headers/hexlify.o 00:02:21.425 CXX test/cpp_headers/histogram_data.o 00:02:21.425 CXX test/cpp_headers/ioat.o 00:02:21.425 CXX test/cpp_headers/init.o 00:02:21.425 CXX test/cpp_headers/idxd_spec.o 00:02:21.425 CXX test/cpp_headers/idxd.o 00:02:21.425 CXX test/cpp_headers/ioat_spec.o 00:02:21.425 CXX test/cpp_headers/jsonrpc.o 00:02:21.425 CXX test/cpp_headers/iscsi_spec.o 00:02:21.425 CXX test/cpp_headers/json.o 00:02:21.425 CXX test/cpp_headers/keyring.o 00:02:21.425 CXX test/cpp_headers/keyring_module.o 00:02:21.425 CXX test/cpp_headers/log.o 00:02:21.425 CXX test/cpp_headers/lvol.o 00:02:21.425 CXX test/cpp_headers/likely.o 00:02:21.425 CXX test/cpp_headers/md5.o 00:02:21.425 CXX test/cpp_headers/memory.o 00:02:21.425 CXX test/cpp_headers/nbd.o 00:02:21.425 CXX test/cpp_headers/mmio.o 00:02:21.425 CXX test/cpp_headers/net.o 00:02:21.425 CXX test/cpp_headers/nvme.o 00:02:21.425 CXX test/cpp_headers/nvme_intel.o 00:02:21.425 CXX test/cpp_headers/notify.o 00:02:21.425 CXX test/cpp_headers/nvme_ocssd.o 00:02:21.425 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:21.425 CXX test/cpp_headers/nvme_zns.o 00:02:21.425 CXX test/cpp_headers/nvme_spec.o 00:02:21.425 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:21.425 CXX test/cpp_headers/nvmf_cmd.o 00:02:21.425 CXX test/cpp_headers/nvmf_spec.o 00:02:21.425 CXX test/cpp_headers/nvmf.o 00:02:21.425 CXX test/cpp_headers/nvmf_transport.o 00:02:21.425 CXX test/cpp_headers/opal.o 00:02:21.425 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:21.425 CC test/env/vtophys/vtophys.o 00:02:21.425 CC examples/ioat/verify/verify.o 00:02:21.425 CC examples/util/zipf/zipf.o 00:02:21.425 CC test/env/pci/pci_ut.o 00:02:21.425 CC test/app/jsoncat/jsoncat.o 00:02:21.425 CC examples/ioat/perf/perf.o 00:02:21.425 CC test/thread/poller_perf/poller_perf.o 00:02:21.425 CC test/app/histogram_perf/histogram_perf.o 00:02:21.425 CC test/env/memory/memory_ut.o 00:02:21.425 CC test/dma/test_dma/test_dma.o 00:02:21.425 CC app/fio/nvme/fio_plugin.o 00:02:21.425 CC test/app/stub/stub.o 00:02:21.425 CC app/fio/bdev/fio_plugin.o 00:02:21.725 CC 
test/app/bdev_svc/bdev_svc.o 00:02:21.725 LINK spdk_nvme_discover 00:02:21.725 LINK rpc_client_test 00:02:21.725 LINK interrupt_tgt 00:02:21.725 LINK spdk_lspci 00:02:22.002 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:22.002 CC test/env/mem_callbacks/mem_callbacks.o 00:02:22.002 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:22.002 LINK jsoncat 00:02:22.002 LINK nvmf_tgt 00:02:22.002 LINK env_dpdk_post_init 00:02:22.002 LINK iscsi_tgt 00:02:22.002 LINK histogram_perf 00:02:22.002 LINK zipf 00:02:22.002 LINK spdk_tgt 00:02:22.002 CXX test/cpp_headers/opal_spec.o 00:02:22.002 CXX test/cpp_headers/pci_ids.o 00:02:22.002 CXX test/cpp_headers/pipe.o 00:02:22.002 CXX test/cpp_headers/queue.o 00:02:22.002 CXX test/cpp_headers/reduce.o 00:02:22.002 CXX test/cpp_headers/rpc.o 00:02:22.002 CXX test/cpp_headers/scheduler.o 00:02:22.002 CXX test/cpp_headers/scsi.o 00:02:22.002 CXX test/cpp_headers/scsi_spec.o 00:02:22.002 CXX test/cpp_headers/sock.o 00:02:22.002 CXX test/cpp_headers/stdinc.o 00:02:22.002 CXX test/cpp_headers/string.o 00:02:22.002 CXX test/cpp_headers/thread.o 00:02:22.002 CXX test/cpp_headers/trace_parser.o 00:02:22.002 CXX test/cpp_headers/trace.o 00:02:22.002 CXX test/cpp_headers/tree.o 00:02:22.002 CXX test/cpp_headers/util.o 00:02:22.002 LINK stub 00:02:22.002 CXX test/cpp_headers/ublk.o 00:02:22.002 CXX test/cpp_headers/uuid.o 00:02:22.002 CXX test/cpp_headers/version.o 00:02:22.002 CXX test/cpp_headers/vfio_user_pci.o 00:02:22.002 LINK spdk_trace_record 00:02:22.002 LINK ioat_perf 00:02:22.002 CXX test/cpp_headers/vfio_user_spec.o 00:02:22.002 CXX test/cpp_headers/vhost.o 00:02:22.002 CXX test/cpp_headers/vmd.o 00:02:22.002 CXX test/cpp_headers/xor.o 00:02:22.002 LINK vtophys 00:02:22.002 CXX test/cpp_headers/zipf.o 00:02:22.311 LINK poller_perf 00:02:22.311 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:22.311 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:22.311 LINK verify 00:02:22.311 LINK bdev_svc 00:02:22.311 LINK pci_ut 00:02:22.311 LINK spdk_dd 00:02:22.311 LINK spdk_trace 00:02:22.597 LINK nvme_fuzz 00:02:22.597 LINK test_dma 00:02:22.597 LINK spdk_bdev 00:02:22.597 LINK spdk_nvme_identify 00:02:22.597 CC examples/vmd/lsvmd/lsvmd.o 00:02:22.597 CC examples/vmd/led/led.o 00:02:22.597 CC examples/idxd/perf/perf.o 00:02:22.597 CC examples/thread/thread/thread_ex.o 00:02:22.597 CC examples/sock/hello_world/hello_sock.o 00:02:22.597 LINK spdk_nvme 00:02:22.597 LINK vhost_fuzz 00:02:22.597 CC test/event/reactor/reactor.o 00:02:22.597 CC test/event/reactor_perf/reactor_perf.o 00:02:22.597 CC test/event/event_perf/event_perf.o 00:02:22.597 CC test/event/app_repeat/app_repeat.o 00:02:22.597 LINK spdk_nvme_perf 00:02:22.597 LINK lsvmd 00:02:22.597 CC test/event/scheduler/scheduler.o 00:02:22.597 LINK led 00:02:22.855 LINK spdk_top 00:02:22.855 LINK mem_callbacks 00:02:22.855 CC app/vhost/vhost.o 00:02:22.855 LINK reactor_perf 00:02:22.855 LINK hello_sock 00:02:22.855 LINK reactor 00:02:22.855 LINK thread 00:02:22.855 LINK event_perf 00:02:22.855 LINK app_repeat 00:02:22.855 LINK idxd_perf 00:02:22.855 LINK scheduler 00:02:22.855 CC test/nvme/connect_stress/connect_stress.o 00:02:22.855 CC test/nvme/fdp/fdp.o 00:02:22.855 CC test/nvme/simple_copy/simple_copy.o 00:02:22.855 CC test/nvme/boot_partition/boot_partition.o 00:02:22.855 CC test/nvme/overhead/overhead.o 00:02:22.855 CC test/nvme/sgl/sgl.o 00:02:22.855 CC test/nvme/cuse/cuse.o 00:02:22.855 CC test/nvme/aer/aer.o 00:02:22.855 CC test/nvme/reserve/reserve.o 00:02:23.114 CC test/nvme/e2edp/nvme_dp.o 00:02:23.114 CC 
test/nvme/err_injection/err_injection.o 00:02:23.114 CC test/accel/dif/dif.o 00:02:23.114 CC test/nvme/startup/startup.o 00:02:23.114 CC test/blobfs/mkfs/mkfs.o 00:02:23.114 CC test/nvme/compliance/nvme_compliance.o 00:02:23.114 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:23.114 CC test/nvme/reset/reset.o 00:02:23.114 CC test/nvme/fused_ordering/fused_ordering.o 00:02:23.114 LINK vhost 00:02:23.114 LINK memory_ut 00:02:23.114 CC test/lvol/esnap/esnap.o 00:02:23.114 LINK connect_stress 00:02:23.114 LINK startup 00:02:23.114 LINK boot_partition 00:02:23.114 LINK err_injection 00:02:23.114 LINK reserve 00:02:23.114 LINK doorbell_aers 00:02:23.114 LINK mkfs 00:02:23.114 LINK fused_ordering 00:02:23.114 LINK simple_copy 00:02:23.373 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:23.373 CC examples/nvme/arbitration/arbitration.o 00:02:23.373 LINK reset 00:02:23.373 LINK sgl 00:02:23.373 LINK nvme_dp 00:02:23.373 LINK aer 00:02:23.373 CC examples/nvme/hello_world/hello_world.o 00:02:23.373 CC examples/nvme/abort/abort.o 00:02:23.373 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:23.373 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:23.373 CC examples/nvme/reconnect/reconnect.o 00:02:23.373 CC examples/nvme/hotplug/hotplug.o 00:02:23.373 LINK overhead 00:02:23.373 LINK fdp 00:02:23.373 LINK nvme_compliance 00:02:23.373 CC examples/accel/perf/accel_perf.o 00:02:23.373 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:23.373 CC examples/blob/cli/blobcli.o 00:02:23.373 CC examples/blob/hello_world/hello_blob.o 00:02:23.373 LINK pmr_persistence 00:02:23.373 LINK cmb_copy 00:02:23.373 LINK hello_world 00:02:23.632 LINK hotplug 00:02:23.632 LINK iscsi_fuzz 00:02:23.632 LINK arbitration 00:02:23.632 LINK reconnect 00:02:23.632 LINK dif 00:02:23.632 LINK abort 00:02:23.632 LINK hello_blob 00:02:23.632 LINK hello_fsdev 00:02:23.632 LINK nvme_manage 00:02:23.632 LINK accel_perf 00:02:23.891 LINK blobcli 00:02:24.150 LINK cuse 00:02:24.150 CC test/bdev/bdevio/bdevio.o 00:02:24.150 CC examples/bdev/hello_world/hello_bdev.o 00:02:24.150 CC examples/bdev/bdevperf/bdevperf.o 00:02:24.410 LINK bdevio 00:02:24.410 LINK hello_bdev 00:02:24.978 LINK bdevperf 00:02:25.236 CC examples/nvmf/nvmf/nvmf.o 00:02:25.495 LINK nvmf 00:02:26.874 LINK esnap 00:02:26.874 00:02:26.874 real 0m56.264s 00:02:26.874 user 8m19.054s 00:02:26.874 sys 3m46.835s 00:02:26.874 16:03:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:26.874 16:03:58 make -- common/autotest_common.sh@10 -- $ set +x 00:02:26.874 ************************************ 00:02:26.874 END TEST make 00:02:26.874 ************************************ 00:02:26.874 16:03:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:26.874 16:03:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:26.874 16:03:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:26.874 16:03:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.874 16:03:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:26.875 16:03:58 -- pm/common@44 -- $ pid=1643625 00:02:26.875 16:03:58 -- pm/common@50 -- $ kill -TERM 1643625 00:02:26.875 16:03:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.875 16:03:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:26.875 16:03:58 -- pm/common@44 -- $ pid=1643627 00:02:26.875 16:03:58 -- pm/common@50 -- $ kill -TERM 1643627 
00:02:26.875 16:03:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.875 16:03:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:26.875 16:03:58 -- pm/common@44 -- $ pid=1643629 00:02:26.875 16:03:58 -- pm/common@50 -- $ kill -TERM 1643629 00:02:26.875 16:03:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.875 16:03:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:26.875 16:03:58 -- pm/common@44 -- $ pid=1643653 00:02:26.875 16:03:58 -- pm/common@50 -- $ sudo -E kill -TERM 1643653 00:02:26.875 16:03:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:26.875 16:03:58 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:27.135 16:03:58 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:27.135 16:03:58 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:27.135 16:03:58 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:27.135 16:03:58 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:27.135 16:03:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:27.135 16:03:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:27.135 16:03:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:27.135 16:03:58 -- scripts/common.sh@336 -- # IFS=.-: 00:02:27.135 16:03:58 -- scripts/common.sh@336 -- # read -ra ver1 00:02:27.135 16:03:58 -- scripts/common.sh@337 -- # IFS=.-: 00:02:27.135 16:03:58 -- scripts/common.sh@337 -- # read -ra ver2 00:02:27.135 16:03:58 -- scripts/common.sh@338 -- # local 'op=<' 00:02:27.135 16:03:58 -- scripts/common.sh@340 -- # ver1_l=2 00:02:27.135 16:03:58 -- scripts/common.sh@341 -- # ver2_l=1 00:02:27.135 16:03:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:27.135 16:03:58 -- scripts/common.sh@344 -- # case "$op" in 00:02:27.135 16:03:58 -- scripts/common.sh@345 -- # : 1 00:02:27.135 16:03:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:27.135 16:03:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.135 16:03:58 -- scripts/common.sh@365 -- # decimal 1 00:02:27.135 16:03:58 -- scripts/common.sh@353 -- # local d=1 00:02:27.135 16:03:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:27.135 16:03:58 -- scripts/common.sh@355 -- # echo 1 00:02:27.135 16:03:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:27.136 16:03:58 -- scripts/common.sh@366 -- # decimal 2 00:02:27.136 16:03:58 -- scripts/common.sh@353 -- # local d=2 00:02:27.136 16:03:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:27.136 16:03:58 -- scripts/common.sh@355 -- # echo 2 00:02:27.136 16:03:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:27.136 16:03:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:27.136 16:03:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:27.136 16:03:58 -- scripts/common.sh@368 -- # return 0 00:02:27.136 16:03:58 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:27.136 16:03:58 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:27.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:27.136 --rc genhtml_branch_coverage=1 00:02:27.136 --rc genhtml_function_coverage=1 00:02:27.136 --rc genhtml_legend=1 00:02:27.136 --rc geninfo_all_blocks=1 00:02:27.136 --rc geninfo_unexecuted_blocks=1 00:02:27.136 00:02:27.136 ' 00:02:27.136 16:03:58 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:27.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:27.136 --rc genhtml_branch_coverage=1 00:02:27.136 --rc genhtml_function_coverage=1 00:02:27.136 --rc genhtml_legend=1 00:02:27.136 --rc geninfo_all_blocks=1 00:02:27.136 --rc geninfo_unexecuted_blocks=1 00:02:27.136 00:02:27.136 ' 00:02:27.136 16:03:58 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:27.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:27.136 --rc genhtml_branch_coverage=1 00:02:27.136 --rc genhtml_function_coverage=1 00:02:27.136 --rc genhtml_legend=1 00:02:27.136 --rc geninfo_all_blocks=1 00:02:27.136 --rc geninfo_unexecuted_blocks=1 00:02:27.136 00:02:27.136 ' 00:02:27.136 16:03:58 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:27.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:27.136 --rc genhtml_branch_coverage=1 00:02:27.136 --rc genhtml_function_coverage=1 00:02:27.136 --rc genhtml_legend=1 00:02:27.136 --rc geninfo_all_blocks=1 00:02:27.136 --rc geninfo_unexecuted_blocks=1 00:02:27.136 00:02:27.136 ' 00:02:27.136 16:03:58 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:27.136 16:03:58 -- nvmf/common.sh@7 -- # uname -s 00:02:27.136 16:03:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:27.136 16:03:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:27.136 16:03:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:27.136 16:03:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:27.136 16:03:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:27.136 16:03:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:27.136 16:03:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:27.136 16:03:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:27.136 16:03:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:27.136 16:03:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:27.136 16:03:58 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:02:27.136 16:03:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:02:27.136 16:03:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:27.136 16:03:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:27.136 16:03:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:27.136 16:03:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:27.136 16:03:58 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:27.136 16:03:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:27.136 16:03:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:27.136 16:03:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:27.136 16:03:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:27.136 16:03:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.136 16:03:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.136 16:03:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.136 16:03:58 -- paths/export.sh@5 -- # export PATH 00:02:27.136 16:03:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.136 16:03:58 -- nvmf/common.sh@51 -- # : 0 00:02:27.136 16:03:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:27.136 16:03:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:27.136 16:03:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:27.136 16:03:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:27.136 16:03:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:27.136 16:03:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:27.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:27.136 16:03:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:27.136 16:03:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:27.136 16:03:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:27.136 16:03:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:27.136 16:03:58 -- spdk/autotest.sh@32 -- # uname -s 00:02:27.136 16:03:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:27.136 16:03:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:27.136 16:03:58 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
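
nvmf/common.sh above takes its host identity from nvme-cli: gen-hostnqn emits a UUID-based NQN, and the UUID portion doubles as the host ID that later feeds the NVME_HOST connect arguments. A sketch of that derivation (the exact parsing in common.sh may differ):

  hostnqn=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  hostid=${hostnqn##*:}              # keep the trailing uuid; it contains no ':' itself
  nvme_host=(--hostnqn="$hostnqn" --hostid="$hostid")   # mirrors the NVME_HOST array
  echo "nvme connect args: ${nvme_host[*]}"
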
00:02:27.136 16:03:58 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:27.136 16:03:58 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:27.136 16:03:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:27.136 16:03:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:27.136 16:03:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:27.136 16:03:58 -- spdk/autotest.sh@48 -- # udevadm_pid=1706673 00:02:27.136 16:03:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:27.136 16:03:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:27.136 16:03:58 -- pm/common@17 -- # local monitor 00:02:27.136 16:03:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.136 16:03:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.136 16:03:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.136 16:03:58 -- pm/common@21 -- # date +%s 00:02:27.136 16:03:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.136 16:03:58 -- pm/common@21 -- # date +%s 00:02:27.136 16:03:58 -- pm/common@25 -- # sleep 1 00:02:27.136 16:03:58 -- pm/common@21 -- # date +%s 00:02:27.136 16:03:58 -- pm/common@21 -- # date +%s 00:02:27.136 16:03:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732115038 00:02:27.136 16:03:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732115038 00:02:27.136 16:03:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732115038 00:02:27.136 16:03:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732115038 00:02:27.136 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732115038_collect-cpu-load.pm.log 00:02:27.136 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732115038_collect-vmstat.pm.log 00:02:27.136 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732115038_collect-cpu-temp.pm.log 00:02:27.136 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732115038_collect-bmc-pm.bmc.pm.log 00:02:28.076 16:03:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:28.076 16:03:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:28.076 16:03:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:28.076 16:03:59 -- common/autotest_common.sh@10 -- # set +x 00:02:28.335 16:03:59 -- spdk/autotest.sh@59 -- # create_test_list 00:02:28.335 16:03:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:28.335 16:03:59 -- common/autotest_common.sh@10 -- # set +x 00:02:28.335 16:03:59 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:28.335 16:03:59 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.335 16:03:59 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.335 16:03:59 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:28.335 16:03:59 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.335 16:03:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:28.335 16:03:59 -- common/autotest_common.sh@1457 -- # uname 00:02:28.335 16:03:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:28.335 16:03:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:28.335 16:03:59 -- common/autotest_common.sh@1477 -- # uname 00:02:28.335 16:03:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:28.335 16:03:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:28.335 16:03:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:28.335 lcov: LCOV version 1.15 00:02:28.335 16:03:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:40.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:40.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:52.750 16:04:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:52.750 16:04:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:52.750 16:04:23 -- common/autotest_common.sh@10 -- # set +x 00:02:52.750 16:04:23 -- spdk/autotest.sh@78 -- # rm -f 00:02:52.750 16:04:23 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.037 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:56.037 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:56.037 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:56.037 16:04:27 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:02:56.037 16:04:27 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:56.037 16:04:27 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:56.037 16:04:27 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:56.037 16:04:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:56.037 16:04:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:56.037 16:04:27 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:56.037 16:04:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:56.037 16:04:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:56.037 16:04:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:56.037 16:04:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:56.037 16:04:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:56.037 16:04:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:56.037 16:04:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:56.038 16:04:27 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:56.038 No valid GPT data, bailing 00:02:56.038 16:04:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:56.038 16:04:27 -- scripts/common.sh@394 -- # pt= 00:02:56.038 16:04:27 -- scripts/common.sh@395 -- # return 1 00:02:56.038 16:04:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:56.038 1+0 records in 00:02:56.038 1+0 records out 00:02:56.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405031 s, 259 MB/s 00:02:56.038 16:04:27 -- spdk/autotest.sh@105 -- # sync 00:02:56.038 16:04:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:56.038 16:04:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:56.038 16:04:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:02.609 16:04:32 -- spdk/autotest.sh@111 -- # uname -s 00:03:02.609 16:04:32 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:02.609 16:04:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:02.609 16:04:32 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:04.516 Hugepages 00:03:04.516 node hugesize free / total 00:03:04.516 node0 1048576kB 0 / 0 00:03:04.516 node0 2048kB 0 / 0 00:03:04.516 node1 1048576kB 0 / 0 00:03:04.516 node1 2048kB 0 / 0 00:03:04.516 00:03:04.516 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:04.516 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:04.516 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:04.516 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:04.516 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:04.516 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:04.516 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:04.516 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:04.516 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:04.516 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:04.516 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:04.516 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:04.516 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:04.516 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:04.516 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:04.516 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:04.516 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:04.516 I/OAT 0000:80:04.7 8086 
2021 1 ioatdma - - 00:03:04.516 16:04:35 -- spdk/autotest.sh@117 -- # uname -s 00:03:04.516 16:04:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:04.516 16:04:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:04.516 16:04:35 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:07.807 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:07.807 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:07.807 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:07.807 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:07.807 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:07.807 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:07.807 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:07.807 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:07.807 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:07.807 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:07.808 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:07.808 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:07.808 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:07.808 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:07.808 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:07.808 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:09.191 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:09.191 16:04:40 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:10.130 16:04:41 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:10.130 16:04:41 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:10.130 16:04:41 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:10.130 16:04:41 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:10.130 16:04:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:10.130 16:04:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:10.130 16:04:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:10.130 16:04:41 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:10.130 16:04:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:10.130 16:04:41 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:10.130 16:04:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:10.130 16:04:41 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.419 Waiting for block devices as requested 00:03:13.419 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:13.419 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:13.419 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:13.419 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:13.419 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:13.419 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:13.419 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:13.419 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:13.678 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:13.678 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:13.678 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:13.678 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:13.937 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:13.937 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:13.937 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:14.196 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:14.196 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:03:14.196 16:04:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:14.196 16:04:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:14.196 16:04:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:14.196 16:04:45 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:14.196 16:04:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:14.196 16:04:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:14.196 16:04:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:14.196 16:04:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:14.196 16:04:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:14.196 16:04:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:14.196 16:04:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:14.196 16:04:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:14.196 16:04:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:14.196 16:04:45 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:14.196 16:04:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:14.196 16:04:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:14.196 16:04:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:14.196 16:04:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:14.196 16:04:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:14.196 16:04:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:14.196 16:04:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:14.196 16:04:45 -- common/autotest_common.sh@1543 -- # continue 00:03:14.196 16:04:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:14.196 16:04:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:14.196 16:04:45 -- common/autotest_common.sh@10 -- # set +x 00:03:14.454 16:04:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:14.454 16:04:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:14.454 16:04:45 -- common/autotest_common.sh@10 -- # set +x 00:03:14.454 16:04:45 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:17.744 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:17.744 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:18.680 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:18.939 16:04:49 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:18.939 16:04:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:18.939 16:04:49 -- common/autotest_common.sh@10 -- # set +x 00:03:18.939 16:04:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:18.939 16:04:49 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:18.939 16:04:49 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:18.939 16:04:49 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:18.939 16:04:49 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:18.939 16:04:49 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:18.939 16:04:49 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:18.939 16:04:49 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:18.939 16:04:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:18.939 16:04:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:18.939 16:04:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:18.939 16:04:49 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:18.939 16:04:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:18.939 16:04:50 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:18.939 16:04:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:18.939 16:04:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:18.939 16:04:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:18.939 16:04:50 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:18.939 16:04:50 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:18.939 16:04:50 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:18.939 16:04:50 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:18.939 16:04:50 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:18.939 16:04:50 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:18.939 16:04:50 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1720924 00:03:18.940 16:04:50 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:18.940 16:04:50 -- common/autotest_common.sh@1585 -- # waitforlisten 1720924 00:03:18.940 16:04:50 -- common/autotest_common.sh@835 -- # '[' -z 1720924 ']' 00:03:18.940 16:04:50 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:18.940 16:04:50 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:18.940 16:04:50 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:18.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:18.940 16:04:50 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:18.940 16:04:50 -- common/autotest_common.sh@10 -- # set +x 00:03:18.940 [2024-11-20 16:04:50.098001] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
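
The opal_revert_cleanup path above narrows the controller list by PCI device id: every BDF reported by gen_nvme.sh is kept only if /sys/bus/pci/devices/<bdf>/device reads 0x0a54. A sketch of that sysfs filter, with a hard-coded BDF standing in for the generated list:

  target=0x0a54
  matched=()
  for bdf in 0000:5e:00.0; do                        # the real list comes from gen_nvme.sh | jq
      dev=$(cat "/sys/bus/pci/devices/$bdf/device")  # e.g. 0x0a54
      [[ $dev == "$target" ]] && matched+=("$bdf")
  done
  (( ${#matched[@]} )) && printf '%s\n' "${matched[@]}"
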
00:03:18.940 [2024-11-20 16:04:50.098047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720924 ] 00:03:19.198 [2024-11-20 16:04:50.172226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:19.199 [2024-11-20 16:04:50.215289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:19.458 16:04:50 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:19.458 16:04:50 -- common/autotest_common.sh@868 -- # return 0 00:03:19.458 16:04:50 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:19.458 16:04:50 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:19.458 16:04:50 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:22.747 nvme0n1 00:03:22.747 16:04:53 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:22.747 [2024-11-20 16:04:53.631004] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:22.747 request: 00:03:22.747 { 00:03:22.747 "nvme_ctrlr_name": "nvme0", 00:03:22.747 "password": "test", 00:03:22.747 "method": "bdev_nvme_opal_revert", 00:03:22.747 "req_id": 1 00:03:22.747 } 00:03:22.747 Got JSON-RPC error response 00:03:22.747 response: 00:03:22.747 { 00:03:22.747 "code": -32602, 00:03:22.747 "message": "Invalid parameters" 00:03:22.747 } 00:03:22.747 16:04:53 -- common/autotest_common.sh@1591 -- # true 00:03:22.747 16:04:53 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:22.747 16:04:53 -- common/autotest_common.sh@1595 -- # killprocess 1720924 00:03:22.747 16:04:53 -- common/autotest_common.sh@954 -- # '[' -z 1720924 ']' 00:03:22.747 16:04:53 -- common/autotest_common.sh@958 -- # kill -0 1720924 00:03:22.747 16:04:53 -- common/autotest_common.sh@959 -- # uname 00:03:22.747 16:04:53 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:22.747 16:04:53 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1720924 00:03:22.747 16:04:53 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:22.747 16:04:53 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:22.747 16:04:53 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1720924' 00:03:22.747 killing process with pid 1720924 00:03:22.747 16:04:53 -- common/autotest_common.sh@973 -- # kill 1720924 00:03:22.747 16:04:53 -- common/autotest_common.sh@978 -- # wait 1720924 00:03:24.651 16:04:55 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:24.651 16:04:55 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:24.651 16:04:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:24.651 16:04:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:24.651 16:04:55 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:24.651 16:04:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:24.651 16:04:55 -- common/autotest_common.sh@10 -- # set +x 00:03:24.651 16:04:55 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:24.651 16:04:55 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:24.651 16:04:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:24.651 16:04:55 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:03:24.651 16:04:55 -- common/autotest_common.sh@10 -- # set +x 00:03:24.651 ************************************ 00:03:24.651 START TEST env 00:03:24.651 ************************************ 00:03:24.651 16:04:55 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:24.910 * Looking for test storage... 00:03:24.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:24.910 16:04:55 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:24.910 16:04:55 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:24.910 16:04:55 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:24.910 16:04:55 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:24.910 16:04:55 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:24.910 16:04:55 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:24.910 16:04:55 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:24.910 16:04:55 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:24.910 16:04:55 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:24.910 16:04:55 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:24.910 16:04:55 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:24.910 16:04:55 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:24.910 16:04:55 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:24.910 16:04:55 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:24.910 16:04:55 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:24.910 16:04:55 env -- scripts/common.sh@344 -- # case "$op" in 00:03:24.910 16:04:55 env -- scripts/common.sh@345 -- # : 1 00:03:24.910 16:04:55 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:24.910 16:04:55 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:24.910 16:04:55 env -- scripts/common.sh@365 -- # decimal 1 00:03:24.910 16:04:55 env -- scripts/common.sh@353 -- # local d=1 00:03:24.910 16:04:55 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:24.910 16:04:55 env -- scripts/common.sh@355 -- # echo 1 00:03:24.910 16:04:55 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:24.910 16:04:56 env -- scripts/common.sh@366 -- # decimal 2 00:03:24.910 16:04:56 env -- scripts/common.sh@353 -- # local d=2 00:03:24.910 16:04:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:24.910 16:04:56 env -- scripts/common.sh@355 -- # echo 2 00:03:24.910 16:04:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:24.910 16:04:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:24.910 16:04:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:24.910 16:04:56 env -- scripts/common.sh@368 -- # return 0 00:03:24.910 16:04:56 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:24.910 16:04:56 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.910 --rc genhtml_branch_coverage=1 00:03:24.910 --rc genhtml_function_coverage=1 00:03:24.910 --rc genhtml_legend=1 00:03:24.910 --rc geninfo_all_blocks=1 00:03:24.910 --rc geninfo_unexecuted_blocks=1 00:03:24.910 00:03:24.910 ' 00:03:24.910 16:04:56 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.910 --rc genhtml_branch_coverage=1 00:03:24.910 --rc genhtml_function_coverage=1 00:03:24.910 --rc genhtml_legend=1 00:03:24.910 --rc geninfo_all_blocks=1 00:03:24.910 --rc geninfo_unexecuted_blocks=1 00:03:24.910 00:03:24.910 ' 00:03:24.910 16:04:56 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.910 --rc genhtml_branch_coverage=1 00:03:24.910 --rc genhtml_function_coverage=1 00:03:24.910 --rc genhtml_legend=1 00:03:24.910 --rc geninfo_all_blocks=1 00:03:24.910 --rc geninfo_unexecuted_blocks=1 00:03:24.910 00:03:24.910 ' 00:03:24.910 16:04:56 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.910 --rc genhtml_branch_coverage=1 00:03:24.910 --rc genhtml_function_coverage=1 00:03:24.910 --rc genhtml_legend=1 00:03:24.910 --rc geninfo_all_blocks=1 00:03:24.910 --rc geninfo_unexecuted_blocks=1 00:03:24.910 00:03:24.910 ' 00:03:24.910 16:04:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:24.910 16:04:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:24.910 16:04:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:24.910 16:04:56 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.910 ************************************ 00:03:24.910 START TEST env_memory 00:03:24.910 ************************************ 00:03:24.910 16:04:56 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:24.910 00:03:24.910 00:03:24.910 CUnit - A unit testing framework for C - Version 2.1-3 00:03:24.910 http://cunit.sourceforge.net/ 00:03:24.910 00:03:24.910 00:03:24.910 Suite: memory 00:03:24.910 Test: alloc and free memory map ...[2024-11-20 16:04:56.089063] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:24.910 passed 00:03:24.910 Test: mem map translation ...[2024-11-20 16:04:56.107844] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:24.910 [2024-11-20 16:04:56.107858] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:24.910 [2024-11-20 16:04:56.107893] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:24.910 [2024-11-20 16:04:56.107899] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:24.910 passed 00:03:25.169 Test: mem map registration ...[2024-11-20 16:04:56.143669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:25.169 [2024-11-20 16:04:56.143685] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:25.169 passed 00:03:25.169 Test: mem map adjacent registrations ...passed 00:03:25.169 00:03:25.169 Run Summary: Type Total Ran Passed Failed Inactive 00:03:25.170 suites 1 1 n/a 0 0 00:03:25.170 tests 4 4 4 0 0 00:03:25.170 asserts 152 152 152 0 n/a 00:03:25.170 00:03:25.170 Elapsed time = 0.135 seconds 00:03:25.170 00:03:25.170 real 0m0.148s 00:03:25.170 user 0m0.140s 00:03:25.170 sys 0m0.007s 00:03:25.170 16:04:56 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:25.170 16:04:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:25.170 ************************************ 00:03:25.170 END TEST env_memory 00:03:25.170 ************************************ 00:03:25.170 16:04:56 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:25.170 16:04:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.170 16:04:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.170 16:04:56 env -- common/autotest_common.sh@10 -- # set +x 00:03:25.170 ************************************ 00:03:25.170 START TEST env_vtophys 00:03:25.170 ************************************ 00:03:25.170 16:04:56 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:25.170 EAL: lib.eal log level changed from notice to debug 00:03:25.170 EAL: Detected lcore 0 as core 0 on socket 0 00:03:25.170 EAL: Detected lcore 1 as core 1 on socket 0 00:03:25.170 EAL: Detected lcore 2 as core 2 on socket 0 00:03:25.170 EAL: Detected lcore 3 as core 3 on socket 0 00:03:25.170 EAL: Detected lcore 4 as core 4 on socket 0 00:03:25.170 EAL: Detected lcore 5 as core 5 on socket 0 00:03:25.170 EAL: Detected lcore 6 as core 6 on socket 0 00:03:25.170 EAL: Detected lcore 7 as core 8 on socket 0 00:03:25.170 EAL: Detected lcore 8 as core 9 on socket 0 00:03:25.170 EAL: Detected lcore 9 as core 10 on socket 0 00:03:25.170 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:25.170 EAL: Detected lcore 11 as core 12 on socket 0 00:03:25.170 EAL: Detected lcore 12 as core 13 on socket 0 00:03:25.170 EAL: Detected lcore 13 as core 16 on socket 0 00:03:25.170 EAL: Detected lcore 14 as core 17 on socket 0 00:03:25.170 EAL: Detected lcore 15 as core 18 on socket 0 00:03:25.170 EAL: Detected lcore 16 as core 19 on socket 0 00:03:25.170 EAL: Detected lcore 17 as core 20 on socket 0 00:03:25.170 EAL: Detected lcore 18 as core 21 on socket 0 00:03:25.170 EAL: Detected lcore 19 as core 25 on socket 0 00:03:25.170 EAL: Detected lcore 20 as core 26 on socket 0 00:03:25.170 EAL: Detected lcore 21 as core 27 on socket 0 00:03:25.170 EAL: Detected lcore 22 as core 28 on socket 0 00:03:25.170 EAL: Detected lcore 23 as core 29 on socket 0 00:03:25.170 EAL: Detected lcore 24 as core 0 on socket 1 00:03:25.170 EAL: Detected lcore 25 as core 1 on socket 1 00:03:25.170 EAL: Detected lcore 26 as core 2 on socket 1 00:03:25.170 EAL: Detected lcore 27 as core 3 on socket 1 00:03:25.170 EAL: Detected lcore 28 as core 4 on socket 1 00:03:25.170 EAL: Detected lcore 29 as core 5 on socket 1 00:03:25.170 EAL: Detected lcore 30 as core 6 on socket 1 00:03:25.170 EAL: Detected lcore 31 as core 8 on socket 1 00:03:25.170 EAL: Detected lcore 32 as core 10 on socket 1 00:03:25.170 EAL: Detected lcore 33 as core 11 on socket 1 00:03:25.170 EAL: Detected lcore 34 as core 12 on socket 1 00:03:25.170 EAL: Detected lcore 35 as core 13 on socket 1 00:03:25.170 EAL: Detected lcore 36 as core 16 on socket 1 00:03:25.170 EAL: Detected lcore 37 as core 17 on socket 1 00:03:25.170 EAL: Detected lcore 38 as core 18 on socket 1 00:03:25.170 EAL: Detected lcore 39 as core 19 on socket 1 00:03:25.170 EAL: Detected lcore 40 as core 20 on socket 1 00:03:25.170 EAL: Detected lcore 41 as core 21 on socket 1 00:03:25.170 EAL: Detected lcore 42 as core 24 on socket 1 00:03:25.170 EAL: Detected lcore 43 as core 25 on socket 1 00:03:25.170 EAL: Detected lcore 44 as core 26 on socket 1 00:03:25.170 EAL: Detected lcore 45 as core 27 on socket 1 00:03:25.170 EAL: Detected lcore 46 as core 28 on socket 1 00:03:25.170 EAL: Detected lcore 47 as core 29 on socket 1 00:03:25.170 EAL: Detected lcore 48 as core 0 on socket 0 00:03:25.170 EAL: Detected lcore 49 as core 1 on socket 0 00:03:25.170 EAL: Detected lcore 50 as core 2 on socket 0 00:03:25.170 EAL: Detected lcore 51 as core 3 on socket 0 00:03:25.170 EAL: Detected lcore 52 as core 4 on socket 0 00:03:25.170 EAL: Detected lcore 53 as core 5 on socket 0 00:03:25.170 EAL: Detected lcore 54 as core 6 on socket 0 00:03:25.170 EAL: Detected lcore 55 as core 8 on socket 0 00:03:25.170 EAL: Detected lcore 56 as core 9 on socket 0 00:03:25.170 EAL: Detected lcore 57 as core 10 on socket 0 00:03:25.170 EAL: Detected lcore 58 as core 11 on socket 0 00:03:25.170 EAL: Detected lcore 59 as core 12 on socket 0 00:03:25.170 EAL: Detected lcore 60 as core 13 on socket 0 00:03:25.170 EAL: Detected lcore 61 as core 16 on socket 0 00:03:25.170 EAL: Detected lcore 62 as core 17 on socket 0 00:03:25.170 EAL: Detected lcore 63 as core 18 on socket 0 00:03:25.170 EAL: Detected lcore 64 as core 19 on socket 0 00:03:25.170 EAL: Detected lcore 65 as core 20 on socket 0 00:03:25.170 EAL: Detected lcore 66 as core 21 on socket 0 00:03:25.170 EAL: Detected lcore 67 as core 25 on socket 0 00:03:25.170 EAL: Detected lcore 68 as core 26 on socket 0 00:03:25.170 EAL: Detected lcore 69 as core 27 on socket 0 00:03:25.170 EAL: Detected lcore 70 as core 28 on socket 0 
00:03:25.170 EAL: Detected lcore 71 as core 29 on socket 0 00:03:25.170 EAL: Detected lcore 72 as core 0 on socket 1 00:03:25.170 EAL: Detected lcore 73 as core 1 on socket 1 00:03:25.170 EAL: Detected lcore 74 as core 2 on socket 1 00:03:25.170 EAL: Detected lcore 75 as core 3 on socket 1 00:03:25.170 EAL: Detected lcore 76 as core 4 on socket 1 00:03:25.170 EAL: Detected lcore 77 as core 5 on socket 1 00:03:25.170 EAL: Detected lcore 78 as core 6 on socket 1 00:03:25.170 EAL: Detected lcore 79 as core 8 on socket 1 00:03:25.170 EAL: Detected lcore 80 as core 10 on socket 1 00:03:25.170 EAL: Detected lcore 81 as core 11 on socket 1 00:03:25.170 EAL: Detected lcore 82 as core 12 on socket 1 00:03:25.170 EAL: Detected lcore 83 as core 13 on socket 1 00:03:25.170 EAL: Detected lcore 84 as core 16 on socket 1 00:03:25.170 EAL: Detected lcore 85 as core 17 on socket 1 00:03:25.170 EAL: Detected lcore 86 as core 18 on socket 1 00:03:25.170 EAL: Detected lcore 87 as core 19 on socket 1 00:03:25.170 EAL: Detected lcore 88 as core 20 on socket 1 00:03:25.170 EAL: Detected lcore 89 as core 21 on socket 1 00:03:25.170 EAL: Detected lcore 90 as core 24 on socket 1 00:03:25.170 EAL: Detected lcore 91 as core 25 on socket 1 00:03:25.170 EAL: Detected lcore 92 as core 26 on socket 1 00:03:25.170 EAL: Detected lcore 93 as core 27 on socket 1 00:03:25.170 EAL: Detected lcore 94 as core 28 on socket 1 00:03:25.170 EAL: Detected lcore 95 as core 29 on socket 1 00:03:25.170 EAL: Maximum logical cores by configuration: 128 00:03:25.170 EAL: Detected CPU lcores: 96 00:03:25.170 EAL: Detected NUMA nodes: 2 00:03:25.170 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:25.170 EAL: Detected shared linkage of DPDK 00:03:25.170 EAL: No shared files mode enabled, IPC will be disabled 00:03:25.170 EAL: Bus pci wants IOVA as 'DC' 00:03:25.170 EAL: Buses did not request a specific IOVA mode. 00:03:25.170 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:25.170 EAL: Selected IOVA mode 'VA' 00:03:25.170 EAL: Probing VFIO support... 00:03:25.170 EAL: IOMMU type 1 (Type 1) is supported 00:03:25.170 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:25.170 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:25.170 EAL: VFIO support initialized 00:03:25.170 EAL: Ask a virtual area of 0x2e000 bytes 00:03:25.170 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:25.170 EAL: Setting up physically contiguous memory... 
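
The lcore probe above is essentially a walk of the kernel's CPU topology; the same lcore -> core/socket table can be read back from sysfs without DPDK, for example:

  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      lcore=${cpu##*cpu}
      core=$(<"$cpu/topology/core_id")
      socket=$(<"$cpu/topology/physical_package_id")
      echo "lcore $lcore -> core $core on socket $socket"
  done | sort -n -k2
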
00:03:25.170 EAL: Setting maximum number of open files to 524288 00:03:25.170 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:25.170 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:25.170 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:25.170 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.170 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:25.170 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.170 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.170 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:25.170 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:25.170 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.170 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:25.170 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.170 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.170 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:25.170 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:25.170 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.170 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:25.170 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.170 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.170 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:25.170 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:25.170 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.170 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:25.170 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.170 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.170 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:25.170 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:25.170 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:25.170 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.170 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:25.170 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.170 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.170 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:25.170 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:25.170 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.170 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:25.170 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.170 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.170 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:25.170 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:25.170 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.170 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:25.170 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.170 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.170 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:25.170 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:25.170 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.170 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:25.170 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.170 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.170 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:25.170 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:25.170 EAL: Hugepages will be freed exactly as allocated. 00:03:25.170 EAL: No shared files mode enabled, IPC is disabled 00:03:25.170 EAL: No shared files mode enabled, IPC is disabled 00:03:25.170 EAL: TSC frequency is ~2100000 KHz 00:03:25.170 EAL: Main lcore 0 is ready (tid=7f8e03b05a00;cpuset=[0]) 00:03:25.170 EAL: Trying to obtain current memory policy. 00:03:25.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.170 EAL: Restoring previous memory policy: 0 00:03:25.170 EAL: request: mp_malloc_sync 00:03:25.170 EAL: No shared files mode enabled, IPC is disabled 00:03:25.170 EAL: Heap on socket 0 was expanded by 2MB 00:03:25.170 EAL: No shared files mode enabled, IPC is disabled 00:03:25.170 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:25.170 EAL: Mem event callback 'spdk:(nil)' registered 00:03:25.170 00:03:25.170 00:03:25.170 CUnit - A unit testing framework for C - Version 2.1-3 00:03:25.170 http://cunit.sourceforge.net/ 00:03:25.170 00:03:25.170 00:03:25.170 Suite: components_suite 00:03:25.170 Test: vtophys_malloc_test ...passed 00:03:25.170 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:25.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.170 EAL: Restoring previous memory policy: 4 00:03:25.170 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.170 EAL: request: mp_malloc_sync 00:03:25.170 EAL: No shared files mode enabled, IPC is disabled 00:03:25.170 EAL: Heap on socket 0 was expanded by 4MB 00:03:25.170 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.170 EAL: request: mp_malloc_sync 00:03:25.170 EAL: No shared files mode enabled, IPC is disabled 00:03:25.170 EAL: Heap on socket 0 was shrunk by 4MB 00:03:25.170 EAL: Trying to obtain current memory policy. 00:03:25.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.170 EAL: Restoring previous memory policy: 4 00:03:25.170 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.170 EAL: request: mp_malloc_sync 00:03:25.170 EAL: No shared files mode enabled, IPC is disabled 00:03:25.170 EAL: Heap on socket 0 was expanded by 6MB 00:03:25.170 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.170 EAL: request: mp_malloc_sync 00:03:25.170 EAL: No shared files mode enabled, IPC is disabled 00:03:25.170 EAL: Heap on socket 0 was shrunk by 6MB 00:03:25.170 EAL: Trying to obtain current memory policy. 00:03:25.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.170 EAL: Restoring previous memory policy: 4 00:03:25.170 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.170 EAL: request: mp_malloc_sync 00:03:25.171 EAL: No shared files mode enabled, IPC is disabled 00:03:25.171 EAL: Heap on socket 0 was expanded by 10MB 00:03:25.171 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.171 EAL: request: mp_malloc_sync 00:03:25.171 EAL: No shared files mode enabled, IPC is disabled 00:03:25.171 EAL: Heap on socket 0 was shrunk by 10MB 00:03:25.171 EAL: Trying to obtain current memory policy. 
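
These expand/shrink rounds grow and release the malloc heap in multiples of the 2 MiB hugepage size detected earlier; whether a node has headroom for that can be checked straight from sysfs (node names depend on the machine):

  for node in /sys/devices/system/node/node[0-9]*; do
      hp="$node/hugepages/hugepages-2048kB"
      printf '%s: %s free / %s total 2048kB pages\n' \
          "${node##*/}" "$(<"$hp/free_hugepages")" "$(<"$hp/nr_hugepages")"
  done
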
00:03:25.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.171 EAL: Restoring previous memory policy: 4 00:03:25.171 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.171 EAL: request: mp_malloc_sync 00:03:25.171 EAL: No shared files mode enabled, IPC is disabled 00:03:25.171 EAL: Heap on socket 0 was expanded by 18MB 00:03:25.171 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.171 EAL: request: mp_malloc_sync 00:03:25.171 EAL: No shared files mode enabled, IPC is disabled 00:03:25.171 EAL: Heap on socket 0 was shrunk by 18MB 00:03:25.171 EAL: Trying to obtain current memory policy. 00:03:25.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.171 EAL: Restoring previous memory policy: 4 00:03:25.171 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.171 EAL: request: mp_malloc_sync 00:03:25.171 EAL: No shared files mode enabled, IPC is disabled 00:03:25.171 EAL: Heap on socket 0 was expanded by 34MB 00:03:25.171 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.171 EAL: request: mp_malloc_sync 00:03:25.171 EAL: No shared files mode enabled, IPC is disabled 00:03:25.171 EAL: Heap on socket 0 was shrunk by 34MB 00:03:25.171 EAL: Trying to obtain current memory policy. 00:03:25.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.171 EAL: Restoring previous memory policy: 4 00:03:25.171 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.171 EAL: request: mp_malloc_sync 00:03:25.171 EAL: No shared files mode enabled, IPC is disabled 00:03:25.171 EAL: Heap on socket 0 was expanded by 66MB 00:03:25.171 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.430 EAL: request: mp_malloc_sync 00:03:25.430 EAL: No shared files mode enabled, IPC is disabled 00:03:25.430 EAL: Heap on socket 0 was shrunk by 66MB 00:03:25.430 EAL: Trying to obtain current memory policy. 00:03:25.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.430 EAL: Restoring previous memory policy: 4 00:03:25.430 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.430 EAL: request: mp_malloc_sync 00:03:25.430 EAL: No shared files mode enabled, IPC is disabled 00:03:25.430 EAL: Heap on socket 0 was expanded by 130MB 00:03:25.430 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.430 EAL: request: mp_malloc_sync 00:03:25.430 EAL: No shared files mode enabled, IPC is disabled 00:03:25.430 EAL: Heap on socket 0 was shrunk by 130MB 00:03:25.430 EAL: Trying to obtain current memory policy. 00:03:25.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.430 EAL: Restoring previous memory policy: 4 00:03:25.430 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.430 EAL: request: mp_malloc_sync 00:03:25.430 EAL: No shared files mode enabled, IPC is disabled 00:03:25.430 EAL: Heap on socket 0 was expanded by 258MB 00:03:25.430 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.430 EAL: request: mp_malloc_sync 00:03:25.430 EAL: No shared files mode enabled, IPC is disabled 00:03:25.430 EAL: Heap on socket 0 was shrunk by 258MB 00:03:25.430 EAL: Trying to obtain current memory policy. 
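
Each allocation round here sets MPOL_PREFERRED for socket 0 and then restores the previous policy. Outside the test binary the same preference can be expressed with numactl, assuming that package is installed:

  numactl --show                                                       # current policy and allowed nodes
  numactl --preferred=0 dd if=/dev/zero of=/dev/null bs=1M count=16    # prefer node 0 for allocations
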
00:03:25.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.689 EAL: Restoring previous memory policy: 4 00:03:25.689 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.689 EAL: request: mp_malloc_sync 00:03:25.689 EAL: No shared files mode enabled, IPC is disabled 00:03:25.689 EAL: Heap on socket 0 was expanded by 514MB 00:03:25.689 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.689 EAL: request: mp_malloc_sync 00:03:25.689 EAL: No shared files mode enabled, IPC is disabled 00:03:25.689 EAL: Heap on socket 0 was shrunk by 514MB 00:03:25.689 EAL: Trying to obtain current memory policy. 00:03:25.689 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.949 EAL: Restoring previous memory policy: 4 00:03:25.950 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.950 EAL: request: mp_malloc_sync 00:03:25.950 EAL: No shared files mode enabled, IPC is disabled 00:03:25.950 EAL: Heap on socket 0 was expanded by 1026MB 00:03:26.209 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.209 EAL: request: mp_malloc_sync 00:03:26.209 EAL: No shared files mode enabled, IPC is disabled 00:03:26.209 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:26.209 passed 00:03:26.209 00:03:26.209 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.209 suites 1 1 n/a 0 0 00:03:26.209 tests 2 2 2 0 0 00:03:26.209 asserts 497 497 497 0 n/a 00:03:26.209 00:03:26.209 Elapsed time = 0.964 seconds 00:03:26.209 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.209 EAL: request: mp_malloc_sync 00:03:26.209 EAL: No shared files mode enabled, IPC is disabled 00:03:26.209 EAL: Heap on socket 0 was shrunk by 2MB 00:03:26.209 EAL: No shared files mode enabled, IPC is disabled 00:03:26.209 EAL: No shared files mode enabled, IPC is disabled 00:03:26.209 EAL: No shared files mode enabled, IPC is disabled 00:03:26.209 00:03:26.209 real 0m1.097s 00:03:26.209 user 0m0.646s 00:03:26.209 sys 0m0.420s 00:03:26.209 16:04:57 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.209 16:04:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:26.209 ************************************ 00:03:26.209 END TEST env_vtophys 00:03:26.209 ************************************ 00:03:26.209 16:04:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:26.209 16:04:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.209 16:04:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.209 16:04:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.209 ************************************ 00:03:26.209 START TEST env_pci 00:03:26.209 ************************************ 00:03:26.209 16:04:57 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:26.535 00:03:26.535 00:03:26.535 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.535 http://cunit.sourceforge.net/ 00:03:26.535 00:03:26.535 00:03:26.535 Suite: pci 00:03:26.535 Test: pci_hook ...[2024-11-20 16:04:57.449011] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1722235 has claimed it 00:03:26.535 EAL: Cannot find device (10000:00:01.0) 00:03:26.535 EAL: Failed to attach device on primary process 00:03:26.535 passed 00:03:26.535 00:03:26.535 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:26.535 suites 1 1 n/a 0 0 00:03:26.535 tests 1 1 1 0 0 00:03:26.535 asserts 25 25 25 0 n/a 00:03:26.535 00:03:26.535 Elapsed time = 0.030 seconds 00:03:26.535 00:03:26.535 real 0m0.050s 00:03:26.535 user 0m0.014s 00:03:26.535 sys 0m0.036s 00:03:26.535 16:04:57 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.535 16:04:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:26.535 ************************************ 00:03:26.535 END TEST env_pci 00:03:26.535 ************************************ 00:03:26.535 16:04:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:26.535 16:04:57 env -- env/env.sh@15 -- # uname 00:03:26.535 16:04:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:26.535 16:04:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:26.535 16:04:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:26.535 16:04:57 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:26.535 16:04:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.535 16:04:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.535 ************************************ 00:03:26.535 START TEST env_dpdk_post_init 00:03:26.535 ************************************ 00:03:26.535 16:04:57 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:26.535 EAL: Detected CPU lcores: 96 00:03:26.535 EAL: Detected NUMA nodes: 2 00:03:26.535 EAL: Detected shared linkage of DPDK 00:03:26.535 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:26.535 EAL: Selected IOVA mode 'VA' 00:03:26.535 EAL: VFIO support initialized 00:03:26.535 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:26.535 EAL: Using IOMMU type 1 (Type 1) 00:03:26.535 EAL: Ignore mapping IO port bar(1) 00:03:26.535 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:26.535 EAL: Ignore mapping IO port bar(1) 00:03:26.535 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:26.535 EAL: Ignore mapping IO port bar(1) 00:03:26.535 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:26.535 EAL: Ignore mapping IO port bar(1) 00:03:26.535 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:26.832 EAL: Ignore mapping IO port bar(1) 00:03:26.832 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:26.832 EAL: Ignore mapping IO port bar(1) 00:03:26.832 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:26.832 EAL: Ignore mapping IO port bar(1) 00:03:26.832 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:26.832 EAL: Ignore mapping IO port bar(1) 00:03:26.832 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:27.410 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:27.410 EAL: Ignore mapping IO port bar(1) 00:03:27.410 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:27.410 EAL: Ignore mapping IO port bar(1) 00:03:27.410 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:27.410 EAL: Ignore mapping IO port bar(1) 00:03:27.410 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:27.410 EAL: Ignore mapping IO port bar(1) 00:03:27.410 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:27.410 EAL: Ignore mapping IO port bar(1) 00:03:27.410 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:27.410 EAL: Ignore mapping IO port bar(1) 00:03:27.410 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:27.410 EAL: Ignore mapping IO port bar(1) 00:03:27.410 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:27.410 EAL: Ignore mapping IO port bar(1) 00:03:27.410 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:31.599 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:31.599 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:31.599 Starting DPDK initialization... 00:03:31.599 Starting SPDK post initialization... 00:03:31.599 SPDK NVMe probe 00:03:31.599 Attaching to 0000:5e:00.0 00:03:31.599 Attached to 0000:5e:00.0 00:03:31.599 Cleaning up... 00:03:31.599 00:03:31.599 real 0m4.938s 00:03:31.599 user 0m3.503s 00:03:31.599 sys 0m0.505s 00:03:31.599 16:05:02 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.599 16:05:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:31.599 ************************************ 00:03:31.599 END TEST env_dpdk_post_init 00:03:31.599 ************************************ 00:03:31.600 16:05:02 env -- env/env.sh@26 -- # uname 00:03:31.600 16:05:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:31.600 16:05:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:31.600 16:05:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.600 16:05:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.600 16:05:02 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.600 ************************************ 00:03:31.600 START TEST env_mem_callbacks 00:03:31.600 ************************************ 00:03:31.600 16:05:02 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:31.600 EAL: Detected CPU lcores: 96 00:03:31.600 EAL: Detected NUMA nodes: 2 00:03:31.600 EAL: Detected shared linkage of DPDK 00:03:31.600 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:31.600 EAL: Selected IOVA mode 'VA' 00:03:31.600 EAL: VFIO support initialized 00:03:31.600 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:31.600 00:03:31.600 00:03:31.600 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.600 http://cunit.sourceforge.net/ 00:03:31.600 00:03:31.600 00:03:31.600 Suite: memory 00:03:31.600 Test: test ... 
00:03:31.600 register 0x200000200000 2097152 00:03:31.600 malloc 3145728 00:03:31.600 register 0x200000400000 4194304 00:03:31.600 buf 0x200000500000 len 3145728 PASSED 00:03:31.600 malloc 64 00:03:31.600 buf 0x2000004fff40 len 64 PASSED 00:03:31.600 malloc 4194304 00:03:31.600 register 0x200000800000 6291456 00:03:31.600 buf 0x200000a00000 len 4194304 PASSED 00:03:31.600 free 0x200000500000 3145728 00:03:31.600 free 0x2000004fff40 64 00:03:31.600 unregister 0x200000400000 4194304 PASSED 00:03:31.600 free 0x200000a00000 4194304 00:03:31.600 unregister 0x200000800000 6291456 PASSED 00:03:31.600 malloc 8388608 00:03:31.600 register 0x200000400000 10485760 00:03:31.600 buf 0x200000600000 len 8388608 PASSED 00:03:31.600 free 0x200000600000 8388608 00:03:31.600 unregister 0x200000400000 10485760 PASSED 00:03:31.600 passed 00:03:31.600 00:03:31.600 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.600 suites 1 1 n/a 0 0 00:03:31.600 tests 1 1 1 0 0 00:03:31.600 asserts 15 15 15 0 n/a 00:03:31.600 00:03:31.600 Elapsed time = 0.008 seconds 00:03:31.600 00:03:31.600 real 0m0.058s 00:03:31.600 user 0m0.021s 00:03:31.600 sys 0m0.037s 00:03:31.600 16:05:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.600 16:05:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:31.600 ************************************ 00:03:31.600 END TEST env_mem_callbacks 00:03:31.600 ************************************ 00:03:31.600 00:03:31.600 real 0m6.834s 00:03:31.600 user 0m4.556s 00:03:31.600 sys 0m1.353s 00:03:31.600 16:05:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.600 16:05:02 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.600 ************************************ 00:03:31.600 END TEST env 00:03:31.600 ************************************ 00:03:31.600 16:05:02 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:31.600 16:05:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.600 16:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.600 16:05:02 -- common/autotest_common.sh@10 -- # set +x 00:03:31.600 ************************************ 00:03:31.600 START TEST rpc 00:03:31.600 ************************************ 00:03:31.600 16:05:02 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:31.600 * Looking for test storage... 
00:03:31.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:31.859 16:05:02 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:31.859 16:05:02 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:31.859 16:05:02 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:31.859 16:05:02 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:31.859 16:05:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.859 16:05:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.859 16:05:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.859 16:05:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.859 16:05:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.859 16:05:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.859 16:05:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.859 16:05:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.859 16:05:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.859 16:05:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.859 16:05:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.859 16:05:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:31.859 16:05:02 rpc -- scripts/common.sh@345 -- # : 1 00:03:31.859 16:05:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.859 16:05:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:31.859 16:05:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:31.859 16:05:02 rpc -- scripts/common.sh@353 -- # local d=1 00:03:31.859 16:05:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.859 16:05:02 rpc -- scripts/common.sh@355 -- # echo 1 00:03:31.859 16:05:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.859 16:05:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:31.859 16:05:02 rpc -- scripts/common.sh@353 -- # local d=2 00:03:31.859 16:05:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.859 16:05:02 rpc -- scripts/common.sh@355 -- # echo 2 00:03:31.859 16:05:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.859 16:05:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.859 16:05:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.859 16:05:02 rpc -- scripts/common.sh@368 -- # return 0 00:03:31.859 16:05:02 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.859 16:05:02 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:31.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.860 --rc genhtml_branch_coverage=1 00:03:31.860 --rc genhtml_function_coverage=1 00:03:31.860 --rc genhtml_legend=1 00:03:31.860 --rc geninfo_all_blocks=1 00:03:31.860 --rc geninfo_unexecuted_blocks=1 00:03:31.860 00:03:31.860 ' 00:03:31.860 16:05:02 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:31.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.860 --rc genhtml_branch_coverage=1 00:03:31.860 --rc genhtml_function_coverage=1 00:03:31.860 --rc genhtml_legend=1 00:03:31.860 --rc geninfo_all_blocks=1 00:03:31.860 --rc geninfo_unexecuted_blocks=1 00:03:31.860 00:03:31.860 ' 00:03:31.860 16:05:02 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:31.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.860 --rc genhtml_branch_coverage=1 00:03:31.860 --rc genhtml_function_coverage=1 
00:03:31.860 --rc genhtml_legend=1 00:03:31.860 --rc geninfo_all_blocks=1 00:03:31.860 --rc geninfo_unexecuted_blocks=1 00:03:31.860 00:03:31.860 ' 00:03:31.860 16:05:02 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:31.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.860 --rc genhtml_branch_coverage=1 00:03:31.860 --rc genhtml_function_coverage=1 00:03:31.860 --rc genhtml_legend=1 00:03:31.860 --rc geninfo_all_blocks=1 00:03:31.860 --rc geninfo_unexecuted_blocks=1 00:03:31.860 00:03:31.860 ' 00:03:31.860 16:05:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1723291 00:03:31.860 16:05:02 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:31.860 16:05:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:31.860 16:05:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1723291 00:03:31.860 16:05:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 1723291 ']' 00:03:31.860 16:05:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:31.860 16:05:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:31.860 16:05:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:31.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:31.860 16:05:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:31.860 16:05:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.860 [2024-11-20 16:05:02.972458] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:03:31.860 [2024-11-20 16:05:02.972502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723291 ] 00:03:31.860 [2024-11-20 16:05:03.028657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.860 [2024-11-20 16:05:03.067576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:31.860 [2024-11-20 16:05:03.067616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1723291' to capture a snapshot of events at runtime. 00:03:31.860 [2024-11-20 16:05:03.067625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:31.860 [2024-11-20 16:05:03.067631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:31.860 [2024-11-20 16:05:03.067636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1723291 for offline analysis/debug. 
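The pair of app_setup_trace notices just above is the hook for debugging this rpc run: spdk_tgt was started with '-e bdev', so the bdev tracepoint group is enabled (the rpc_trace_cmd_test output further down reports its tpoint_mask as 0xffffffffffffffff), and the trace buffer lives in /dev/shm for the lifetime of the target. A minimal sketch of both options the notice offers, using the exact pid from this run (1723291) and assuming spdk_trace was built into build/bin alongside spdk_tgt:

# Live snapshot of the enabled 'bdev' tracepoints, exactly as suggested by the notice above.
./build/bin/spdk_trace -s spdk_tgt -p 1723291

# Or preserve the shared-memory trace buffer for offline analysis once the target exits;
# the filename is specific to this run's pid.
cp /dev/shm/spdk_tgt_trace.pid1723291 /tmp/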
00:03:31.860 [2024-11-20 16:05:03.068226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.119 16:05:03 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:32.119 16:05:03 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:32.119 16:05:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:32.119 16:05:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:32.119 16:05:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:32.119 16:05:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:32.119 16:05:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.119 16:05:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.119 16:05:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.119 ************************************ 00:03:32.119 START TEST rpc_integrity 00:03:32.119 ************************************ 00:03:32.119 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:32.119 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:32.119 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.119 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.119 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.119 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:32.119 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:32.378 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:32.378 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.378 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:32.378 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.378 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:32.378 { 00:03:32.378 "name": "Malloc0", 00:03:32.378 "aliases": [ 00:03:32.378 "3476b262-a00d-476c-a077-5c3e6fecb934" 00:03:32.378 ], 00:03:32.378 "product_name": "Malloc disk", 00:03:32.378 "block_size": 512, 00:03:32.378 "num_blocks": 16384, 00:03:32.378 "uuid": "3476b262-a00d-476c-a077-5c3e6fecb934", 00:03:32.378 "assigned_rate_limits": { 00:03:32.378 "rw_ios_per_sec": 0, 00:03:32.378 "rw_mbytes_per_sec": 0, 00:03:32.378 "r_mbytes_per_sec": 0, 00:03:32.378 "w_mbytes_per_sec": 0 00:03:32.378 }, 
00:03:32.378 "claimed": false, 00:03:32.378 "zoned": false, 00:03:32.378 "supported_io_types": { 00:03:32.378 "read": true, 00:03:32.378 "write": true, 00:03:32.378 "unmap": true, 00:03:32.378 "flush": true, 00:03:32.378 "reset": true, 00:03:32.378 "nvme_admin": false, 00:03:32.378 "nvme_io": false, 00:03:32.378 "nvme_io_md": false, 00:03:32.378 "write_zeroes": true, 00:03:32.378 "zcopy": true, 00:03:32.378 "get_zone_info": false, 00:03:32.378 "zone_management": false, 00:03:32.378 "zone_append": false, 00:03:32.378 "compare": false, 00:03:32.378 "compare_and_write": false, 00:03:32.378 "abort": true, 00:03:32.378 "seek_hole": false, 00:03:32.378 "seek_data": false, 00:03:32.378 "copy": true, 00:03:32.378 "nvme_iov_md": false 00:03:32.378 }, 00:03:32.378 "memory_domains": [ 00:03:32.378 { 00:03:32.378 "dma_device_id": "system", 00:03:32.378 "dma_device_type": 1 00:03:32.378 }, 00:03:32.378 { 00:03:32.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.378 "dma_device_type": 2 00:03:32.378 } 00:03:32.378 ], 00:03:32.378 "driver_specific": {} 00:03:32.378 } 00:03:32.378 ]' 00:03:32.378 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:32.378 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:32.378 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.378 [2024-11-20 16:05:03.457894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:32.378 [2024-11-20 16:05:03.457925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:32.378 [2024-11-20 16:05:03.457938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1395280 00:03:32.378 [2024-11-20 16:05:03.457944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:32.378 [2024-11-20 16:05:03.459030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:32.378 [2024-11-20 16:05:03.459052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:32.378 Passthru0 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.378 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.378 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.378 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:32.378 { 00:03:32.378 "name": "Malloc0", 00:03:32.378 "aliases": [ 00:03:32.378 "3476b262-a00d-476c-a077-5c3e6fecb934" 00:03:32.378 ], 00:03:32.378 "product_name": "Malloc disk", 00:03:32.378 "block_size": 512, 00:03:32.378 "num_blocks": 16384, 00:03:32.378 "uuid": "3476b262-a00d-476c-a077-5c3e6fecb934", 00:03:32.378 "assigned_rate_limits": { 00:03:32.378 "rw_ios_per_sec": 0, 00:03:32.378 "rw_mbytes_per_sec": 0, 00:03:32.378 "r_mbytes_per_sec": 0, 00:03:32.378 "w_mbytes_per_sec": 0 00:03:32.378 }, 00:03:32.378 "claimed": true, 00:03:32.378 "claim_type": "exclusive_write", 00:03:32.378 "zoned": false, 00:03:32.378 "supported_io_types": { 00:03:32.378 "read": true, 00:03:32.378 "write": true, 00:03:32.378 "unmap": true, 00:03:32.378 "flush": 
true, 00:03:32.378 "reset": true, 00:03:32.379 "nvme_admin": false, 00:03:32.379 "nvme_io": false, 00:03:32.379 "nvme_io_md": false, 00:03:32.379 "write_zeroes": true, 00:03:32.379 "zcopy": true, 00:03:32.379 "get_zone_info": false, 00:03:32.379 "zone_management": false, 00:03:32.379 "zone_append": false, 00:03:32.379 "compare": false, 00:03:32.379 "compare_and_write": false, 00:03:32.379 "abort": true, 00:03:32.379 "seek_hole": false, 00:03:32.379 "seek_data": false, 00:03:32.379 "copy": true, 00:03:32.379 "nvme_iov_md": false 00:03:32.379 }, 00:03:32.379 "memory_domains": [ 00:03:32.379 { 00:03:32.379 "dma_device_id": "system", 00:03:32.379 "dma_device_type": 1 00:03:32.379 }, 00:03:32.379 { 00:03:32.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.379 "dma_device_type": 2 00:03:32.379 } 00:03:32.379 ], 00:03:32.379 "driver_specific": {} 00:03:32.379 }, 00:03:32.379 { 00:03:32.379 "name": "Passthru0", 00:03:32.379 "aliases": [ 00:03:32.379 "f26294b8-23bc-54c6-a6d5-24fdde2a2dc3" 00:03:32.379 ], 00:03:32.379 "product_name": "passthru", 00:03:32.379 "block_size": 512, 00:03:32.379 "num_blocks": 16384, 00:03:32.379 "uuid": "f26294b8-23bc-54c6-a6d5-24fdde2a2dc3", 00:03:32.379 "assigned_rate_limits": { 00:03:32.379 "rw_ios_per_sec": 0, 00:03:32.379 "rw_mbytes_per_sec": 0, 00:03:32.379 "r_mbytes_per_sec": 0, 00:03:32.379 "w_mbytes_per_sec": 0 00:03:32.379 }, 00:03:32.379 "claimed": false, 00:03:32.379 "zoned": false, 00:03:32.379 "supported_io_types": { 00:03:32.379 "read": true, 00:03:32.379 "write": true, 00:03:32.379 "unmap": true, 00:03:32.379 "flush": true, 00:03:32.379 "reset": true, 00:03:32.379 "nvme_admin": false, 00:03:32.379 "nvme_io": false, 00:03:32.379 "nvme_io_md": false, 00:03:32.379 "write_zeroes": true, 00:03:32.379 "zcopy": true, 00:03:32.379 "get_zone_info": false, 00:03:32.379 "zone_management": false, 00:03:32.379 "zone_append": false, 00:03:32.379 "compare": false, 00:03:32.379 "compare_and_write": false, 00:03:32.379 "abort": true, 00:03:32.379 "seek_hole": false, 00:03:32.379 "seek_data": false, 00:03:32.379 "copy": true, 00:03:32.379 "nvme_iov_md": false 00:03:32.379 }, 00:03:32.379 "memory_domains": [ 00:03:32.379 { 00:03:32.379 "dma_device_id": "system", 00:03:32.379 "dma_device_type": 1 00:03:32.379 }, 00:03:32.379 { 00:03:32.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.379 "dma_device_type": 2 00:03:32.379 } 00:03:32.379 ], 00:03:32.379 "driver_specific": { 00:03:32.379 "passthru": { 00:03:32.379 "name": "Passthru0", 00:03:32.379 "base_bdev_name": "Malloc0" 00:03:32.379 } 00:03:32.379 } 00:03:32.379 } 00:03:32.379 ]' 00:03:32.379 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:32.379 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:32.379 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:32.379 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.379 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.379 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.379 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:32.379 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.379 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.379 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.379 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:32.379 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.379 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.379 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.379 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:32.379 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:32.379 16:05:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:32.379 00:03:32.379 real 0m0.266s 00:03:32.379 user 0m0.170s 00:03:32.379 sys 0m0.045s 00:03:32.379 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.379 16:05:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.379 ************************************ 00:03:32.379 END TEST rpc_integrity 00:03:32.379 ************************************ 00:03:32.638 16:05:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:32.638 16:05:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.638 16:05:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.638 16:05:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.638 ************************************ 00:03:32.638 START TEST rpc_plugins 00:03:32.638 ************************************ 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:32.638 16:05:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.638 16:05:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:32.638 16:05:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.638 16:05:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:32.638 { 00:03:32.638 "name": "Malloc1", 00:03:32.638 "aliases": [ 00:03:32.638 "ec2b904f-de6d-4232-9504-f4f1fa0ef71d" 00:03:32.638 ], 00:03:32.638 "product_name": "Malloc disk", 00:03:32.638 "block_size": 4096, 00:03:32.638 "num_blocks": 256, 00:03:32.638 "uuid": "ec2b904f-de6d-4232-9504-f4f1fa0ef71d", 00:03:32.638 "assigned_rate_limits": { 00:03:32.638 "rw_ios_per_sec": 0, 00:03:32.638 "rw_mbytes_per_sec": 0, 00:03:32.638 "r_mbytes_per_sec": 0, 00:03:32.638 "w_mbytes_per_sec": 0 00:03:32.638 }, 00:03:32.638 "claimed": false, 00:03:32.638 "zoned": false, 00:03:32.638 "supported_io_types": { 00:03:32.638 "read": true, 00:03:32.638 "write": true, 00:03:32.638 "unmap": true, 00:03:32.638 "flush": true, 00:03:32.638 "reset": true, 00:03:32.638 "nvme_admin": false, 00:03:32.638 "nvme_io": false, 00:03:32.638 "nvme_io_md": false, 00:03:32.638 "write_zeroes": true, 00:03:32.638 "zcopy": true, 00:03:32.638 "get_zone_info": false, 00:03:32.638 "zone_management": false, 00:03:32.638 "zone_append": false, 00:03:32.638 "compare": false, 00:03:32.638 "compare_and_write": false, 00:03:32.638 "abort": true, 00:03:32.638 "seek_hole": false, 00:03:32.638 "seek_data": false, 00:03:32.638 "copy": true, 00:03:32.638 "nvme_iov_md": false 
00:03:32.638 }, 00:03:32.638 "memory_domains": [ 00:03:32.638 { 00:03:32.638 "dma_device_id": "system", 00:03:32.638 "dma_device_type": 1 00:03:32.638 }, 00:03:32.638 { 00:03:32.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.638 "dma_device_type": 2 00:03:32.638 } 00:03:32.638 ], 00:03:32.638 "driver_specific": {} 00:03:32.638 } 00:03:32.638 ]' 00:03:32.638 16:05:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:32.638 16:05:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:32.638 16:05:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.638 16:05:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.638 16:05:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:32.638 16:05:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:32.638 16:05:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:32.638 00:03:32.638 real 0m0.139s 00:03:32.638 user 0m0.096s 00:03:32.638 sys 0m0.014s 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.638 16:05:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.638 ************************************ 00:03:32.638 END TEST rpc_plugins 00:03:32.638 ************************************ 00:03:32.638 16:05:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:32.638 16:05:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.638 16:05:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.638 16:05:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.638 ************************************ 00:03:32.638 START TEST rpc_trace_cmd_test 00:03:32.638 ************************************ 00:03:32.638 16:05:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:32.638 16:05:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:32.897 16:05:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:32.897 16:05:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.897 16:05:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:32.897 16:05:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.897 16:05:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:32.897 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1723291", 00:03:32.897 "tpoint_group_mask": "0x8", 00:03:32.897 "iscsi_conn": { 00:03:32.897 "mask": "0x2", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "scsi": { 00:03:32.897 "mask": "0x4", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "bdev": { 00:03:32.897 "mask": "0x8", 00:03:32.897 "tpoint_mask": "0xffffffffffffffff" 00:03:32.897 }, 00:03:32.897 "nvmf_rdma": { 00:03:32.897 "mask": "0x10", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "nvmf_tcp": { 00:03:32.897 "mask": "0x20", 00:03:32.897 
"tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "ftl": { 00:03:32.897 "mask": "0x40", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "blobfs": { 00:03:32.897 "mask": "0x80", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "dsa": { 00:03:32.897 "mask": "0x200", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "thread": { 00:03:32.897 "mask": "0x400", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "nvme_pcie": { 00:03:32.897 "mask": "0x800", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "iaa": { 00:03:32.897 "mask": "0x1000", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "nvme_tcp": { 00:03:32.897 "mask": "0x2000", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "bdev_nvme": { 00:03:32.897 "mask": "0x4000", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "sock": { 00:03:32.897 "mask": "0x8000", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "blob": { 00:03:32.897 "mask": "0x10000", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "bdev_raid": { 00:03:32.897 "mask": "0x20000", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 }, 00:03:32.897 "scheduler": { 00:03:32.897 "mask": "0x40000", 00:03:32.897 "tpoint_mask": "0x0" 00:03:32.897 } 00:03:32.897 }' 00:03:32.897 16:05:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:32.897 16:05:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:32.897 16:05:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:32.897 16:05:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:32.897 16:05:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:32.897 16:05:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:32.897 16:05:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:32.897 16:05:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:32.897 16:05:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:32.897 16:05:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:32.897 00:03:32.897 real 0m0.227s 00:03:32.897 user 0m0.194s 00:03:32.897 sys 0m0.025s 00:03:32.897 16:05:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.897 16:05:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:32.897 ************************************ 00:03:32.897 END TEST rpc_trace_cmd_test 00:03:32.897 ************************************ 00:03:32.897 16:05:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:32.897 16:05:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:32.897 16:05:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:32.897 16:05:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.157 16:05:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.157 16:05:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.157 ************************************ 00:03:33.157 START TEST rpc_daemon_integrity 00:03:33.157 ************************************ 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.157 16:05:04 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:33.157 { 00:03:33.157 "name": "Malloc2", 00:03:33.157 "aliases": [ 00:03:33.157 "84ce4479-3a21-4b3c-8971-84310e9a0597" 00:03:33.157 ], 00:03:33.157 "product_name": "Malloc disk", 00:03:33.157 "block_size": 512, 00:03:33.157 "num_blocks": 16384, 00:03:33.157 "uuid": "84ce4479-3a21-4b3c-8971-84310e9a0597", 00:03:33.157 "assigned_rate_limits": { 00:03:33.157 "rw_ios_per_sec": 0, 00:03:33.157 "rw_mbytes_per_sec": 0, 00:03:33.157 "r_mbytes_per_sec": 0, 00:03:33.157 "w_mbytes_per_sec": 0 00:03:33.157 }, 00:03:33.157 "claimed": false, 00:03:33.157 "zoned": false, 00:03:33.157 "supported_io_types": { 00:03:33.157 "read": true, 00:03:33.157 "write": true, 00:03:33.157 "unmap": true, 00:03:33.157 "flush": true, 00:03:33.157 "reset": true, 00:03:33.157 "nvme_admin": false, 00:03:33.157 "nvme_io": false, 00:03:33.157 "nvme_io_md": false, 00:03:33.157 "write_zeroes": true, 00:03:33.157 "zcopy": true, 00:03:33.157 "get_zone_info": false, 00:03:33.157 "zone_management": false, 00:03:33.157 "zone_append": false, 00:03:33.157 "compare": false, 00:03:33.157 "compare_and_write": false, 00:03:33.157 "abort": true, 00:03:33.157 "seek_hole": false, 00:03:33.157 "seek_data": false, 00:03:33.157 "copy": true, 00:03:33.157 "nvme_iov_md": false 00:03:33.157 }, 00:03:33.157 "memory_domains": [ 00:03:33.157 { 00:03:33.157 "dma_device_id": "system", 00:03:33.157 "dma_device_type": 1 00:03:33.157 }, 00:03:33.157 { 00:03:33.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:33.157 "dma_device_type": 2 00:03:33.157 } 00:03:33.157 ], 00:03:33.157 "driver_specific": {} 00:03:33.157 } 00:03:33.157 ]' 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.157 [2024-11-20 16:05:04.288167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:33.157 
[2024-11-20 16:05:04.288199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:33.157 [2024-11-20 16:05:04.288219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1397150 00:03:33.157 [2024-11-20 16:05:04.288226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:33.157 [2024-11-20 16:05:04.289227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:33.157 [2024-11-20 16:05:04.289247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:33.157 Passthru0 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.157 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:33.157 { 00:03:33.157 "name": "Malloc2", 00:03:33.157 "aliases": [ 00:03:33.157 "84ce4479-3a21-4b3c-8971-84310e9a0597" 00:03:33.157 ], 00:03:33.157 "product_name": "Malloc disk", 00:03:33.157 "block_size": 512, 00:03:33.157 "num_blocks": 16384, 00:03:33.157 "uuid": "84ce4479-3a21-4b3c-8971-84310e9a0597", 00:03:33.157 "assigned_rate_limits": { 00:03:33.158 "rw_ios_per_sec": 0, 00:03:33.158 "rw_mbytes_per_sec": 0, 00:03:33.158 "r_mbytes_per_sec": 0, 00:03:33.158 "w_mbytes_per_sec": 0 00:03:33.158 }, 00:03:33.158 "claimed": true, 00:03:33.158 "claim_type": "exclusive_write", 00:03:33.158 "zoned": false, 00:03:33.158 "supported_io_types": { 00:03:33.158 "read": true, 00:03:33.158 "write": true, 00:03:33.158 "unmap": true, 00:03:33.158 "flush": true, 00:03:33.158 "reset": true, 00:03:33.158 "nvme_admin": false, 00:03:33.158 "nvme_io": false, 00:03:33.158 "nvme_io_md": false, 00:03:33.158 "write_zeroes": true, 00:03:33.158 "zcopy": true, 00:03:33.158 "get_zone_info": false, 00:03:33.158 "zone_management": false, 00:03:33.158 "zone_append": false, 00:03:33.158 "compare": false, 00:03:33.158 "compare_and_write": false, 00:03:33.158 "abort": true, 00:03:33.158 "seek_hole": false, 00:03:33.158 "seek_data": false, 00:03:33.158 "copy": true, 00:03:33.158 "nvme_iov_md": false 00:03:33.158 }, 00:03:33.158 "memory_domains": [ 00:03:33.158 { 00:03:33.158 "dma_device_id": "system", 00:03:33.158 "dma_device_type": 1 00:03:33.158 }, 00:03:33.158 { 00:03:33.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:33.158 "dma_device_type": 2 00:03:33.158 } 00:03:33.158 ], 00:03:33.158 "driver_specific": {} 00:03:33.158 }, 00:03:33.158 { 00:03:33.158 "name": "Passthru0", 00:03:33.158 "aliases": [ 00:03:33.158 "04826226-13e4-5c78-b33b-9fc93035d549" 00:03:33.158 ], 00:03:33.158 "product_name": "passthru", 00:03:33.158 "block_size": 512, 00:03:33.158 "num_blocks": 16384, 00:03:33.158 "uuid": "04826226-13e4-5c78-b33b-9fc93035d549", 00:03:33.158 "assigned_rate_limits": { 00:03:33.158 "rw_ios_per_sec": 0, 00:03:33.158 "rw_mbytes_per_sec": 0, 00:03:33.158 "r_mbytes_per_sec": 0, 00:03:33.158 "w_mbytes_per_sec": 0 00:03:33.158 }, 00:03:33.158 "claimed": false, 00:03:33.158 "zoned": false, 00:03:33.158 "supported_io_types": { 00:03:33.158 "read": true, 00:03:33.158 "write": true, 00:03:33.158 "unmap": true, 00:03:33.158 "flush": true, 00:03:33.158 "reset": true, 
00:03:33.158 "nvme_admin": false, 00:03:33.158 "nvme_io": false, 00:03:33.158 "nvme_io_md": false, 00:03:33.158 "write_zeroes": true, 00:03:33.158 "zcopy": true, 00:03:33.158 "get_zone_info": false, 00:03:33.158 "zone_management": false, 00:03:33.158 "zone_append": false, 00:03:33.158 "compare": false, 00:03:33.158 "compare_and_write": false, 00:03:33.158 "abort": true, 00:03:33.158 "seek_hole": false, 00:03:33.158 "seek_data": false, 00:03:33.158 "copy": true, 00:03:33.158 "nvme_iov_md": false 00:03:33.158 }, 00:03:33.158 "memory_domains": [ 00:03:33.158 { 00:03:33.158 "dma_device_id": "system", 00:03:33.158 "dma_device_type": 1 00:03:33.158 }, 00:03:33.158 { 00:03:33.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:33.158 "dma_device_type": 2 00:03:33.158 } 00:03:33.158 ], 00:03:33.158 "driver_specific": { 00:03:33.158 "passthru": { 00:03:33.158 "name": "Passthru0", 00:03:33.158 "base_bdev_name": "Malloc2" 00:03:33.158 } 00:03:33.158 } 00:03:33.158 } 00:03:33.158 ]' 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:33.158 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:33.417 16:05:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:33.417 00:03:33.417 real 0m0.254s 00:03:33.417 user 0m0.171s 00:03:33.417 sys 0m0.033s 00:03:33.417 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.417 16:05:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.417 ************************************ 00:03:33.417 END TEST rpc_daemon_integrity 00:03:33.417 ************************************ 00:03:33.417 16:05:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:33.417 16:05:04 rpc -- rpc/rpc.sh@84 -- # killprocess 1723291 00:03:33.417 16:05:04 rpc -- common/autotest_common.sh@954 -- # '[' -z 1723291 ']' 00:03:33.417 16:05:04 rpc -- common/autotest_common.sh@958 -- # kill -0 1723291 00:03:33.417 16:05:04 rpc -- common/autotest_common.sh@959 -- # uname 00:03:33.417 16:05:04 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:33.417 16:05:04 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1723291 
00:03:33.417 16:05:04 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:33.417 16:05:04 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:33.417 16:05:04 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1723291' 00:03:33.417 killing process with pid 1723291 00:03:33.417 16:05:04 rpc -- common/autotest_common.sh@973 -- # kill 1723291 00:03:33.417 16:05:04 rpc -- common/autotest_common.sh@978 -- # wait 1723291 00:03:33.676 00:03:33.676 real 0m2.057s 00:03:33.676 user 0m2.658s 00:03:33.676 sys 0m0.689s 00:03:33.676 16:05:04 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.676 16:05:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.676 ************************************ 00:03:33.676 END TEST rpc 00:03:33.676 ************************************ 00:03:33.676 16:05:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:33.676 16:05:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.676 16:05:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.676 16:05:04 -- common/autotest_common.sh@10 -- # set +x 00:03:33.676 ************************************ 00:03:33.676 START TEST skip_rpc 00:03:33.676 ************************************ 00:03:33.676 16:05:04 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:33.936 * Looking for test storage... 00:03:33.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.936 16:05:04 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.936 16:05:04 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.936 16:05:04 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.936 16:05:05 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.936 16:05:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:33.936 16:05:05 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.936 16:05:05 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.936 --rc genhtml_branch_coverage=1 00:03:33.936 --rc genhtml_function_coverage=1 00:03:33.936 --rc genhtml_legend=1 00:03:33.936 --rc geninfo_all_blocks=1 00:03:33.936 --rc geninfo_unexecuted_blocks=1 00:03:33.936 00:03:33.936 ' 00:03:33.936 16:05:05 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.936 --rc genhtml_branch_coverage=1 00:03:33.936 --rc genhtml_function_coverage=1 00:03:33.936 --rc genhtml_legend=1 00:03:33.936 --rc geninfo_all_blocks=1 00:03:33.936 --rc geninfo_unexecuted_blocks=1 00:03:33.936 00:03:33.936 ' 00:03:33.936 16:05:05 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.936 --rc genhtml_branch_coverage=1 00:03:33.936 --rc genhtml_function_coverage=1 00:03:33.936 --rc genhtml_legend=1 00:03:33.936 --rc geninfo_all_blocks=1 00:03:33.936 --rc geninfo_unexecuted_blocks=1 00:03:33.936 00:03:33.936 ' 00:03:33.936 16:05:05 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.936 --rc genhtml_branch_coverage=1 00:03:33.936 --rc genhtml_function_coverage=1 00:03:33.936 --rc genhtml_legend=1 00:03:33.936 --rc geninfo_all_blocks=1 00:03:33.936 --rc geninfo_unexecuted_blocks=1 00:03:33.936 00:03:33.936 ' 00:03:33.936 16:05:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:33.936 16:05:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:33.936 16:05:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:33.936 16:05:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.936 16:05:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.936 16:05:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.936 ************************************ 00:03:33.936 START TEST skip_rpc 00:03:33.936 ************************************ 00:03:33.936 16:05:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:33.936 
16:05:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1723796 00:03:33.936 16:05:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:33.936 16:05:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:33.936 16:05:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:33.936 [2024-11-20 16:05:05.141990] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:03:33.936 [2024-11-20 16:05:05.142031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723796 ] 00:03:34.196 [2024-11-20 16:05:05.220412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.196 [2024-11-20 16:05:05.259989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1723796 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1723796 ']' 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1723796 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1723796 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1723796' 00:03:39.467 killing process with pid 1723796 00:03:39.467 16:05:10 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1723796 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1723796 00:03:39.467 00:03:39.467 real 0m5.371s 00:03:39.467 user 0m5.130s 00:03:39.467 sys 0m0.277s 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.467 16:05:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.467 ************************************ 00:03:39.467 END TEST skip_rpc 00:03:39.467 ************************************ 00:03:39.467 16:05:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:39.467 16:05:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.467 16:05:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.467 16:05:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.467 ************************************ 00:03:39.467 START TEST skip_rpc_with_json 00:03:39.467 ************************************ 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1724717 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1724717 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1724717 ']' 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:39.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:39.467 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.467 [2024-11-20 16:05:10.586886] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
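The skip_rpc case that just finished reduces to a single assertion: when spdk_tgt is started with --no-rpc-server, any rpc.py call against it has to fail. A rough standalone equivalent (not the test's own code; binary and script paths copied from this workspace, the 5-second settle time taken from the trace above):

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$SPDK_TGT" --no-rpc-server -m 0x1 &    # target comes up without an RPC listener
  pid=$!
  sleep 5                                 # let the reactor start (core 0 above)
  if "$RPC_PY" spdk_get_version; then     # expected to fail: nothing listens on /var/tmp/spdk.sock
      echo "unexpected: RPC answered although --no-rpc-server was given" >&2
      kill "$pid"; exit 1
  fi
  kill "$pid"                             # same cleanup as the killprocess step above

The DPDK EAL parameter line that follows belongs to the skip_rpc_with_json target whose startup banner appears just above.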
00:03:39.467 [2024-11-20 16:05:10.586936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724717 ] 00:03:39.467 [2024-11-20 16:05:10.663364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.726 [2024-11-20 16:05:10.704509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.726 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:39.726 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:39.727 16:05:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:39.727 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.727 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.727 [2024-11-20 16:05:10.931473] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:39.727 request: 00:03:39.727 { 00:03:39.727 "trtype": "tcp", 00:03:39.727 "method": "nvmf_get_transports", 00:03:39.727 "req_id": 1 00:03:39.727 } 00:03:39.727 Got JSON-RPC error response 00:03:39.727 response: 00:03:39.727 { 00:03:39.727 "code": -19, 00:03:39.727 "message": "No such device" 00:03:39.727 } 00:03:39.727 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:39.727 16:05:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:39.727 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.727 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.727 [2024-11-20 16:05:10.943578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:39.727 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.727 16:05:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:39.727 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.727 16:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.986 16:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.986 16:05:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:39.986 { 00:03:39.986 "subsystems": [ 00:03:39.986 { 00:03:39.986 "subsystem": "fsdev", 00:03:39.986 "config": [ 00:03:39.986 { 00:03:39.986 "method": "fsdev_set_opts", 00:03:39.986 "params": { 00:03:39.986 "fsdev_io_pool_size": 65535, 00:03:39.986 "fsdev_io_cache_size": 256 00:03:39.986 } 00:03:39.986 } 00:03:39.986 ] 00:03:39.986 }, 00:03:39.986 { 00:03:39.986 "subsystem": "vfio_user_target", 00:03:39.986 "config": null 00:03:39.986 }, 00:03:39.986 { 00:03:39.986 "subsystem": "keyring", 00:03:39.986 "config": [] 00:03:39.986 }, 00:03:39.986 { 00:03:39.986 "subsystem": "iobuf", 00:03:39.986 "config": [ 00:03:39.986 { 00:03:39.986 "method": "iobuf_set_options", 00:03:39.986 "params": { 00:03:39.986 "small_pool_count": 8192, 00:03:39.986 "large_pool_count": 1024, 00:03:39.986 "small_bufsize": 8192, 00:03:39.986 "large_bufsize": 135168, 00:03:39.986 "enable_numa": false 00:03:39.986 } 00:03:39.986 } 
00:03:39.986 ] 00:03:39.986 }, 00:03:39.986 { 00:03:39.986 "subsystem": "sock", 00:03:39.986 "config": [ 00:03:39.986 { 00:03:39.986 "method": "sock_set_default_impl", 00:03:39.986 "params": { 00:03:39.986 "impl_name": "posix" 00:03:39.986 } 00:03:39.986 }, 00:03:39.986 { 00:03:39.986 "method": "sock_impl_set_options", 00:03:39.986 "params": { 00:03:39.986 "impl_name": "ssl", 00:03:39.986 "recv_buf_size": 4096, 00:03:39.986 "send_buf_size": 4096, 00:03:39.986 "enable_recv_pipe": true, 00:03:39.986 "enable_quickack": false, 00:03:39.986 "enable_placement_id": 0, 00:03:39.986 "enable_zerocopy_send_server": true, 00:03:39.986 "enable_zerocopy_send_client": false, 00:03:39.986 "zerocopy_threshold": 0, 00:03:39.986 "tls_version": 0, 00:03:39.986 "enable_ktls": false 00:03:39.986 } 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "method": "sock_impl_set_options", 00:03:39.987 "params": { 00:03:39.987 "impl_name": "posix", 00:03:39.987 "recv_buf_size": 2097152, 00:03:39.987 "send_buf_size": 2097152, 00:03:39.987 "enable_recv_pipe": true, 00:03:39.987 "enable_quickack": false, 00:03:39.987 "enable_placement_id": 0, 00:03:39.987 "enable_zerocopy_send_server": true, 00:03:39.987 "enable_zerocopy_send_client": false, 00:03:39.987 "zerocopy_threshold": 0, 00:03:39.987 "tls_version": 0, 00:03:39.987 "enable_ktls": false 00:03:39.987 } 00:03:39.987 } 00:03:39.987 ] 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "subsystem": "vmd", 00:03:39.987 "config": [] 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "subsystem": "accel", 00:03:39.987 "config": [ 00:03:39.987 { 00:03:39.987 "method": "accel_set_options", 00:03:39.987 "params": { 00:03:39.987 "small_cache_size": 128, 00:03:39.987 "large_cache_size": 16, 00:03:39.987 "task_count": 2048, 00:03:39.987 "sequence_count": 2048, 00:03:39.987 "buf_count": 2048 00:03:39.987 } 00:03:39.987 } 00:03:39.987 ] 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "subsystem": "bdev", 00:03:39.987 "config": [ 00:03:39.987 { 00:03:39.987 "method": "bdev_set_options", 00:03:39.987 "params": { 00:03:39.987 "bdev_io_pool_size": 65535, 00:03:39.987 "bdev_io_cache_size": 256, 00:03:39.987 "bdev_auto_examine": true, 00:03:39.987 "iobuf_small_cache_size": 128, 00:03:39.987 "iobuf_large_cache_size": 16 00:03:39.987 } 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "method": "bdev_raid_set_options", 00:03:39.987 "params": { 00:03:39.987 "process_window_size_kb": 1024, 00:03:39.987 "process_max_bandwidth_mb_sec": 0 00:03:39.987 } 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "method": "bdev_iscsi_set_options", 00:03:39.987 "params": { 00:03:39.987 "timeout_sec": 30 00:03:39.987 } 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "method": "bdev_nvme_set_options", 00:03:39.987 "params": { 00:03:39.987 "action_on_timeout": "none", 00:03:39.987 "timeout_us": 0, 00:03:39.987 "timeout_admin_us": 0, 00:03:39.987 "keep_alive_timeout_ms": 10000, 00:03:39.987 "arbitration_burst": 0, 00:03:39.987 "low_priority_weight": 0, 00:03:39.987 "medium_priority_weight": 0, 00:03:39.987 "high_priority_weight": 0, 00:03:39.987 "nvme_adminq_poll_period_us": 10000, 00:03:39.987 "nvme_ioq_poll_period_us": 0, 00:03:39.987 "io_queue_requests": 0, 00:03:39.987 "delay_cmd_submit": true, 00:03:39.987 "transport_retry_count": 4, 00:03:39.987 "bdev_retry_count": 3, 00:03:39.987 "transport_ack_timeout": 0, 00:03:39.987 "ctrlr_loss_timeout_sec": 0, 00:03:39.987 "reconnect_delay_sec": 0, 00:03:39.987 "fast_io_fail_timeout_sec": 0, 00:03:39.987 "disable_auto_failback": false, 00:03:39.987 "generate_uuids": false, 00:03:39.987 "transport_tos": 
0, 00:03:39.987 "nvme_error_stat": false, 00:03:39.987 "rdma_srq_size": 0, 00:03:39.987 "io_path_stat": false, 00:03:39.987 "allow_accel_sequence": false, 00:03:39.987 "rdma_max_cq_size": 0, 00:03:39.987 "rdma_cm_event_timeout_ms": 0, 00:03:39.987 "dhchap_digests": [ 00:03:39.987 "sha256", 00:03:39.987 "sha384", 00:03:39.987 "sha512" 00:03:39.987 ], 00:03:39.987 "dhchap_dhgroups": [ 00:03:39.987 "null", 00:03:39.987 "ffdhe2048", 00:03:39.987 "ffdhe3072", 00:03:39.987 "ffdhe4096", 00:03:39.987 "ffdhe6144", 00:03:39.987 "ffdhe8192" 00:03:39.987 ] 00:03:39.987 } 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "method": "bdev_nvme_set_hotplug", 00:03:39.987 "params": { 00:03:39.987 "period_us": 100000, 00:03:39.987 "enable": false 00:03:39.987 } 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "method": "bdev_wait_for_examine" 00:03:39.987 } 00:03:39.987 ] 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "subsystem": "scsi", 00:03:39.987 "config": null 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "subsystem": "scheduler", 00:03:39.987 "config": [ 00:03:39.987 { 00:03:39.987 "method": "framework_set_scheduler", 00:03:39.987 "params": { 00:03:39.987 "name": "static" 00:03:39.987 } 00:03:39.987 } 00:03:39.987 ] 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "subsystem": "vhost_scsi", 00:03:39.987 "config": [] 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "subsystem": "vhost_blk", 00:03:39.987 "config": [] 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "subsystem": "ublk", 00:03:39.987 "config": [] 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "subsystem": "nbd", 00:03:39.987 "config": [] 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "subsystem": "nvmf", 00:03:39.987 "config": [ 00:03:39.987 { 00:03:39.987 "method": "nvmf_set_config", 00:03:39.987 "params": { 00:03:39.987 "discovery_filter": "match_any", 00:03:39.987 "admin_cmd_passthru": { 00:03:39.987 "identify_ctrlr": false 00:03:39.987 }, 00:03:39.987 "dhchap_digests": [ 00:03:39.987 "sha256", 00:03:39.987 "sha384", 00:03:39.987 "sha512" 00:03:39.987 ], 00:03:39.987 "dhchap_dhgroups": [ 00:03:39.987 "null", 00:03:39.987 "ffdhe2048", 00:03:39.987 "ffdhe3072", 00:03:39.987 "ffdhe4096", 00:03:39.987 "ffdhe6144", 00:03:39.987 "ffdhe8192" 00:03:39.987 ] 00:03:39.987 } 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "method": "nvmf_set_max_subsystems", 00:03:39.987 "params": { 00:03:39.987 "max_subsystems": 1024 00:03:39.987 } 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "method": "nvmf_set_crdt", 00:03:39.987 "params": { 00:03:39.987 "crdt1": 0, 00:03:39.987 "crdt2": 0, 00:03:39.987 "crdt3": 0 00:03:39.987 } 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "method": "nvmf_create_transport", 00:03:39.987 "params": { 00:03:39.987 "trtype": "TCP", 00:03:39.987 "max_queue_depth": 128, 00:03:39.987 "max_io_qpairs_per_ctrlr": 127, 00:03:39.987 "in_capsule_data_size": 4096, 00:03:39.987 "max_io_size": 131072, 00:03:39.987 "io_unit_size": 131072, 00:03:39.987 "max_aq_depth": 128, 00:03:39.987 "num_shared_buffers": 511, 00:03:39.987 "buf_cache_size": 4294967295, 00:03:39.987 "dif_insert_or_strip": false, 00:03:39.987 "zcopy": false, 00:03:39.987 "c2h_success": true, 00:03:39.987 "sock_priority": 0, 00:03:39.987 "abort_timeout_sec": 1, 00:03:39.987 "ack_timeout": 0, 00:03:39.987 "data_wr_pool_size": 0 00:03:39.987 } 00:03:39.987 } 00:03:39.987 ] 00:03:39.987 }, 00:03:39.987 { 00:03:39.987 "subsystem": "iscsi", 00:03:39.987 "config": [ 00:03:39.987 { 00:03:39.987 "method": "iscsi_set_options", 00:03:39.987 "params": { 00:03:39.987 "node_base": "iqn.2016-06.io.spdk", 00:03:39.987 "max_sessions": 
128, 00:03:39.987 "max_connections_per_session": 2, 00:03:39.987 "max_queue_depth": 64, 00:03:39.987 "default_time2wait": 2, 00:03:39.987 "default_time2retain": 20, 00:03:39.987 "first_burst_length": 8192, 00:03:39.987 "immediate_data": true, 00:03:39.987 "allow_duplicated_isid": false, 00:03:39.987 "error_recovery_level": 0, 00:03:39.987 "nop_timeout": 60, 00:03:39.987 "nop_in_interval": 30, 00:03:39.987 "disable_chap": false, 00:03:39.987 "require_chap": false, 00:03:39.987 "mutual_chap": false, 00:03:39.987 "chap_group": 0, 00:03:39.987 "max_large_datain_per_connection": 64, 00:03:39.987 "max_r2t_per_connection": 4, 00:03:39.987 "pdu_pool_size": 36864, 00:03:39.987 "immediate_data_pool_size": 16384, 00:03:39.987 "data_out_pool_size": 2048 00:03:39.987 } 00:03:39.987 } 00:03:39.987 ] 00:03:39.987 } 00:03:39.987 ] 00:03:39.987 } 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1724717 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1724717 ']' 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1724717 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1724717 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1724717' 00:03:39.987 killing process with pid 1724717 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1724717 00:03:39.987 16:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1724717 00:03:40.247 16:05:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1724899 00:03:40.247 16:05:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:40.247 16:05:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:45.519 16:05:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1724899 00:03:45.519 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1724899 ']' 00:03:45.519 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1724899 00:03:45.519 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:45.519 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:45.519 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1724899 00:03:45.519 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:45.519 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:45.519 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1724899' 00:03:45.519 killing process with pid 1724899 00:03:45.519 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1724899 00:03:45.519 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1724899 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:45.778 00:03:45.778 real 0m6.307s 00:03:45.778 user 0m5.995s 00:03:45.778 sys 0m0.620s 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.778 ************************************ 00:03:45.778 END TEST skip_rpc_with_json 00:03:45.778 ************************************ 00:03:45.778 16:05:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:45.778 16:05:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.778 16:05:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.778 16:05:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.778 ************************************ 00:03:45.778 START TEST skip_rpc_with_delay 00:03:45.778 ************************************ 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:45.778 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.778 
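The skip_rpc_with_json case that ends here is a configuration round trip: build some NVMe-oF/TCP state over RPC, dump it with save_config, relaunch the target non-interactively from that JSON file, and grep the relaunched target's log for the 'TCP Transport Init' notice. Condensed into a sketch (commands and file paths taken from the trace; how the second target's output is captured into log.txt is assumed here):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  CFG=$SPDK_DIR/test/rpc/config.json
  LOG=$SPDK_DIR/test/rpc/log.txt
  # with a first target already listening on the default /var/tmp/spdk.sock:
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp    # give the config something TCP-specific
  $SPDK_DIR/scripts/rpc.py save_config > "$CFG"            # the JSON dumped above
  # stop the first target, then replay the file with no RPC server at all:
  $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CFG" > "$LOG" 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' "$LOG" && echo "saved config restored the TCP transport"

The skip_rpc_with_delay case that starts on the last lines above only asserts that --wait-for-rpc is rejected when no RPC server will be started, which is the app.c error recorded next.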
[2024-11-20 16:05:16.965085] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:45.779 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:45.779 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:45.779 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:45.779 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:45.779 00:03:45.779 real 0m0.071s 00:03:45.779 user 0m0.051s 00:03:45.779 sys 0m0.020s 00:03:45.779 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.779 16:05:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:45.779 ************************************ 00:03:45.779 END TEST skip_rpc_with_delay 00:03:45.779 ************************************ 00:03:46.038 16:05:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:46.038 16:05:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:46.038 16:05:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:46.038 16:05:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.038 16:05:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.038 16:05:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.038 ************************************ 00:03:46.038 START TEST exit_on_failed_rpc_init 00:03:46.038 ************************************ 00:03:46.038 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:46.038 16:05:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1725875 00:03:46.038 16:05:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1725875 00:03:46.038 16:05:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:46.038 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1725875 ']' 00:03:46.038 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:46.038 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:46.038 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:46.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:46.038 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:46.038 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:46.038 [2024-11-20 16:05:17.100530] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:03:46.038 [2024-11-20 16:05:17.100570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1725875 ] 00:03:46.038 [2024-11-20 16:05:17.178047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.038 [2024-11-20 16:05:17.219792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:46.297 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:46.297 [2024-11-20 16:05:17.491987] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:03:46.297 [2024-11-20 16:05:17.492032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1725950 ] 00:03:46.556 [2024-11-20 16:05:17.563192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.556 [2024-11-20 16:05:17.603549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:46.556 [2024-11-20 16:05:17.603603] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
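The errors just above and below are the expected outcome of exit_on_failed_rpc_init: a second spdk_tgt (core mask 0x2) is launched while the first (mask 0x1) still owns the default RPC socket, so rpc.c cannot bind /var/tmp/spdk.sock, spdk_app_start fails, and the child exits non-zero. A minimal reproduction of the clash, assuming the same build path; the -r option (used later in this log by the json_config test) is how a second instance would normally get its own socket, and /var/tmp/spdk2.sock is only an example path:

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &                          # first instance binds /var/tmp/spdk.sock
  sleep 5
  "$SPDK_TGT" -m 0x2 \
      && echo "unexpected: second instance started on a busy socket" >&2
  "$SPDK_TGT" -m 0x2 -r /var/tmp/spdk2.sock &   # a distinct -r socket avoids the conflict
  # remember to kill both background targets afterwards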
00:03:46.556 [2024-11-20 16:05:17.603613] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:46.556 [2024-11-20 16:05:17.603621] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:46.556 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:46.556 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:46.556 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:46.556 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:46.556 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:46.556 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:46.556 16:05:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:46.556 16:05:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1725875 00:03:46.556 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1725875 ']' 00:03:46.556 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1725875 00:03:46.557 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:46.557 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:46.557 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1725875 00:03:46.557 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:46.557 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:46.557 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1725875' 00:03:46.557 killing process with pid 1725875 00:03:46.557 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1725875 00:03:46.557 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1725875 00:03:46.817 00:03:46.817 real 0m0.947s 00:03:46.817 user 0m0.995s 00:03:46.817 sys 0m0.390s 00:03:46.817 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.817 16:05:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:46.817 ************************************ 00:03:46.817 END TEST exit_on_failed_rpc_init 00:03:46.817 ************************************ 00:03:46.817 16:05:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:46.817 00:03:46.817 real 0m13.157s 00:03:46.817 user 0m12.385s 00:03:46.817 sys 0m1.587s 00:03:46.817 16:05:18 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.817 16:05:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.817 ************************************ 00:03:46.817 END TEST skip_rpc 00:03:46.817 ************************************ 00:03:47.077 16:05:18 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:47.077 16:05:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.077 16:05:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.077 16:05:18 -- 
common/autotest_common.sh@10 -- # set +x 00:03:47.077 ************************************ 00:03:47.077 START TEST rpc_client 00:03:47.077 ************************************ 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:47.077 * Looking for test storage... 00:03:47.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:47.077 16:05:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:47.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.077 --rc genhtml_branch_coverage=1 00:03:47.077 --rc genhtml_function_coverage=1 00:03:47.077 --rc genhtml_legend=1 00:03:47.077 --rc geninfo_all_blocks=1 00:03:47.077 --rc geninfo_unexecuted_blocks=1 00:03:47.077 00:03:47.077 ' 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:47.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.077 --rc genhtml_branch_coverage=1 00:03:47.077 --rc genhtml_function_coverage=1 00:03:47.077 --rc genhtml_legend=1 00:03:47.077 --rc geninfo_all_blocks=1 00:03:47.077 --rc geninfo_unexecuted_blocks=1 00:03:47.077 00:03:47.077 ' 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:47.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.077 --rc genhtml_branch_coverage=1 00:03:47.077 --rc genhtml_function_coverage=1 00:03:47.077 --rc genhtml_legend=1 00:03:47.077 --rc geninfo_all_blocks=1 00:03:47.077 --rc geninfo_unexecuted_blocks=1 00:03:47.077 00:03:47.077 ' 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:47.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.077 --rc genhtml_branch_coverage=1 00:03:47.077 --rc genhtml_function_coverage=1 00:03:47.077 --rc genhtml_legend=1 00:03:47.077 --rc geninfo_all_blocks=1 00:03:47.077 --rc geninfo_unexecuted_blocks=1 00:03:47.077 00:03:47.077 ' 00:03:47.077 16:05:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:47.077 OK 00:03:47.077 16:05:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:47.077 00:03:47.077 real 0m0.189s 00:03:47.077 user 0m0.118s 00:03:47.077 sys 0m0.084s 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.077 16:05:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:47.077 ************************************ 00:03:47.077 END TEST rpc_client 00:03:47.077 ************************************ 00:03:47.337 16:05:18 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
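The cmp_versions trace above (repeated at the start of the json_config test below) only decides whether the installed lcov is older than 2, so the matching --rc lcov_* spellings can be exported. A simplified stand-in that uses sort -V instead of the field-by-field loop in scripts/common.sh (same effect for these inputs, not the library's own code):

  version_lt() {   # succeed when $1 sorts strictly before $2, e.g. 1.15 < 2
      [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  lcov_ver=$(lcov --version | awk '{print $NF}')   # same extraction as the trace above
  version_lt "$lcov_ver" 2 && echo "older lcov: keep the lcov_branch/function_coverage --rc options"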
00:03:47.337 16:05:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.337 16:05:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.337 16:05:18 -- common/autotest_common.sh@10 -- # set +x 00:03:47.337 ************************************ 00:03:47.337 START TEST json_config 00:03:47.337 ************************************ 00:03:47.337 16:05:18 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:47.337 16:05:18 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:47.337 16:05:18 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:47.337 16:05:18 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:47.337 16:05:18 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:47.337 16:05:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:47.337 16:05:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:47.337 16:05:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:47.337 16:05:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:47.337 16:05:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:47.337 16:05:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:47.337 16:05:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:47.337 16:05:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:47.337 16:05:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:47.337 16:05:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:47.337 16:05:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:47.337 16:05:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:47.337 16:05:18 json_config -- scripts/common.sh@345 -- # : 1 00:03:47.337 16:05:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:47.337 16:05:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:47.337 16:05:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:47.337 16:05:18 json_config -- scripts/common.sh@353 -- # local d=1 00:03:47.337 16:05:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:47.337 16:05:18 json_config -- scripts/common.sh@355 -- # echo 1 00:03:47.337 16:05:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:47.337 16:05:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:47.337 16:05:18 json_config -- scripts/common.sh@353 -- # local d=2 00:03:47.337 16:05:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:47.337 16:05:18 json_config -- scripts/common.sh@355 -- # echo 2 00:03:47.337 16:05:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:47.337 16:05:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:47.337 16:05:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:47.337 16:05:18 json_config -- scripts/common.sh@368 -- # return 0 00:03:47.337 16:05:18 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:47.337 16:05:18 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:47.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.337 --rc genhtml_branch_coverage=1 00:03:47.337 --rc genhtml_function_coverage=1 00:03:47.337 --rc genhtml_legend=1 00:03:47.337 --rc geninfo_all_blocks=1 00:03:47.337 --rc geninfo_unexecuted_blocks=1 00:03:47.337 00:03:47.337 ' 00:03:47.337 16:05:18 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:47.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.337 --rc genhtml_branch_coverage=1 00:03:47.337 --rc genhtml_function_coverage=1 00:03:47.337 --rc genhtml_legend=1 00:03:47.337 --rc geninfo_all_blocks=1 00:03:47.337 --rc geninfo_unexecuted_blocks=1 00:03:47.337 00:03:47.337 ' 00:03:47.337 16:05:18 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:47.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.337 --rc genhtml_branch_coverage=1 00:03:47.337 --rc genhtml_function_coverage=1 00:03:47.337 --rc genhtml_legend=1 00:03:47.337 --rc geninfo_all_blocks=1 00:03:47.337 --rc geninfo_unexecuted_blocks=1 00:03:47.337 00:03:47.337 ' 00:03:47.337 16:05:18 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:47.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.337 --rc genhtml_branch_coverage=1 00:03:47.337 --rc genhtml_function_coverage=1 00:03:47.337 --rc genhtml_legend=1 00:03:47.337 --rc geninfo_all_blocks=1 00:03:47.337 --rc geninfo_unexecuted_blocks=1 00:03:47.337 00:03:47.337 ' 00:03:47.337 16:05:18 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:47.337 16:05:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:47.337 16:05:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:47.338 16:05:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:47.338 16:05:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:47.338 16:05:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:47.338 16:05:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:47.338 16:05:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:47.338 16:05:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.338 16:05:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.338 16:05:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.338 16:05:18 json_config -- paths/export.sh@5 -- # export PATH 00:03:47.338 16:05:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@51 -- # : 0 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:03:47.338 16:05:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:47.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:47.338 16:05:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:47.338 INFO: JSON configuration test init 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:47.338 16:05:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.338 16:05:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:47.338 16:05:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.338 16:05:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.338 16:05:18 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:47.338 16:05:18 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:47.338 16:05:18 json_config -- json_config/common.sh@10 -- # shift 00:03:47.338 16:05:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:47.338 16:05:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:47.338 16:05:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:47.338 16:05:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.338 16:05:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.338 16:05:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1726236 00:03:47.338 16:05:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:47.338 Waiting for target to run... 00:03:47.338 16:05:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:47.338 16:05:18 json_config -- json_config/common.sh@25 -- # waitforlisten 1726236 /var/tmp/spdk_tgt.sock 00:03:47.338 16:05:18 json_config -- common/autotest_common.sh@835 -- # '[' -z 1726236 ']' 00:03:47.338 16:05:18 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:47.338 16:05:18 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:47.338 16:05:18 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:47.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:47.338 16:05:18 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:47.338 16:05:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.598 [2024-11-20 16:05:18.605836] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:03:47.598 [2024-11-20 16:05:18.605881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1726236 ] 00:03:47.857 [2024-11-20 16:05:18.896269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.857 [2024-11-20 16:05:18.930246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.431 16:05:19 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:48.431 16:05:19 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:48.431 16:05:19 json_config -- json_config/common.sh@26 -- # echo '' 00:03:48.431 00:03:48.431 16:05:19 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:48.431 16:05:19 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:48.431 16:05:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:48.431 16:05:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.431 16:05:19 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:48.431 16:05:19 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:48.431 16:05:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:48.431 16:05:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.431 16:05:19 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:48.431 16:05:19 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:48.431 16:05:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:51.717 16:05:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.717 16:05:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:51.717 16:05:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:51.717 16:05:22 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@54 -- # sort 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:51.717 16:05:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.717 16:05:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:51.717 16:05:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.717 16:05:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:51.717 16:05:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.717 16:05:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.976 MallocForNvmf0 00:03:51.976 16:05:23 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:51.976 16:05:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:51.976 MallocForNvmf1 00:03:51.976 16:05:23 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:51.976 16:05:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:52.234 [2024-11-20 16:05:23.362342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:52.234 16:05:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:52.234 16:05:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:52.491 16:05:23 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:52.491 16:05:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:52.749 16:05:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.749 16:05:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.749 16:05:23 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:52.749 16:05:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:53.006 [2024-11-20 16:05:24.128726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:53.006 16:05:24 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:53.006 16:05:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.006 16:05:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.006 16:05:24 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:53.006 16:05:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.006 16:05:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.006 16:05:24 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:53.006 16:05:24 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.006 16:05:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.264 MallocBdevForConfigChangeCheck 00:03:53.264 16:05:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:53.264 16:05:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.264 16:05:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.264 16:05:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:53.264 16:05:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.829 16:05:24 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:53.829 INFO: shutting down applications... 
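Condensed for readability, the target configuration that the tgt_rpc calls traced above build up amounts to the following rpc.py sequence. This is only a restatement of the commands already echoed in the trace (rpc_addr=/var/tmp/spdk_tgt.sock; rootdir stands for the spdk checkout in this workspace):

    rpc=$rootdir/scripts/rpc.py        # rootdir = /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/spdk_tgt.sock
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    $rpc -s $sock save_config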
00:03:53.829 16:05:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:53.829 16:05:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:53.829 16:05:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:53.829 16:05:24 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:55.729 Calling clear_iscsi_subsystem 00:03:55.729 Calling clear_nvmf_subsystem 00:03:55.729 Calling clear_nbd_subsystem 00:03:55.729 Calling clear_ublk_subsystem 00:03:55.729 Calling clear_vhost_blk_subsystem 00:03:55.729 Calling clear_vhost_scsi_subsystem 00:03:55.729 Calling clear_bdev_subsystem 00:03:55.988 16:05:26 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:55.988 16:05:26 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:55.988 16:05:26 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:55.988 16:05:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.988 16:05:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:55.988 16:05:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:56.247 16:05:27 json_config -- json_config/json_config.sh@352 -- # break 00:03:56.247 16:05:27 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:56.247 16:05:27 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:03:56.247 16:05:27 json_config -- json_config/common.sh@31 -- # local app=target 00:03:56.247 16:05:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:56.247 16:05:27 json_config -- json_config/common.sh@35 -- # [[ -n 1726236 ]] 00:03:56.247 16:05:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1726236 00:03:56.247 16:05:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:56.247 16:05:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:56.247 16:05:27 json_config -- json_config/common.sh@41 -- # kill -0 1726236 00:03:56.247 16:05:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:56.816 16:05:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:56.816 16:05:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:56.816 16:05:27 json_config -- json_config/common.sh@41 -- # kill -0 1726236 00:03:56.816 16:05:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:56.816 16:05:27 json_config -- json_config/common.sh@43 -- # break 00:03:56.816 16:05:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:56.816 16:05:27 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:56.816 SPDK target shutdown done 00:03:56.816 16:05:27 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:56.816 INFO: relaunching applications... 
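The shutdown performed by json_config_test_shutdown_app above follows a simple signal-and-poll pattern. A minimal sketch of that loop (variable names are illustrative; the real script keeps the pid in app_pid["$app"]):

    pid=1726236                               # spdk_tgt launched earlier in this run
    kill -SIGINT "$pid"                       # json_config/common.sh@38
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # stop polling once the target has exited
        sleep 0.5                             # matches the 'sleep 0.5' retries in the trace
    done
    echo 'SPDK target shutdown done'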
00:03:56.816 16:05:27 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.816 16:05:27 json_config -- json_config/common.sh@9 -- # local app=target 00:03:56.816 16:05:27 json_config -- json_config/common.sh@10 -- # shift 00:03:56.816 16:05:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:56.816 16:05:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:56.816 16:05:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:56.816 16:05:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.816 16:05:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.816 16:05:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1727972 00:03:56.816 16:05:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:56.816 Waiting for target to run... 00:03:56.816 16:05:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.816 16:05:27 json_config -- json_config/common.sh@25 -- # waitforlisten 1727972 /var/tmp/spdk_tgt.sock 00:03:56.816 16:05:27 json_config -- common/autotest_common.sh@835 -- # '[' -z 1727972 ']' 00:03:56.816 16:05:27 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:56.816 16:05:27 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.816 16:05:27 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:56.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:56.816 16:05:27 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.816 16:05:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.816 [2024-11-20 16:05:27.892474] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:03:56.816 [2024-11-20 16:05:27.892542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727972 ] 00:03:57.395 [2024-11-20 16:05:28.350029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.395 [2024-11-20 16:05:28.402949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.685 [2024-11-20 16:05:31.436006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:00.685 [2024-11-20 16:05:31.468372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:00.944 16:05:32 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.944 16:05:32 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:00.944 16:05:32 json_config -- json_config/common.sh@26 -- # echo '' 00:04:00.944 00:04:00.944 16:05:32 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:00.944 16:05:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:00.944 INFO: Checking if target configuration is the same... 
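Relaunching for the comparison step is simply a restart of spdk_tgt from the JSON saved a moment earlier. Condensed from the json_config_test_start_app trace above; the backgrounding and pid capture are implied by the app_pid bookkeeping rather than shown verbatim in the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &
    app_pid[target]=$!                                     # 1727972 in this run
    waitforlisten "${app_pid[target]}" /var/tmp/spdk_tgt.sock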
00:04:00.944 16:05:32 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:00.944 16:05:32 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.944 16:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:00.944 + '[' 2 -ne 2 ']' 00:04:00.944 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:00.944 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:00.944 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.944 +++ basename /dev/fd/62 00:04:00.944 ++ mktemp /tmp/62.XXX 00:04:00.944 + tmp_file_1=/tmp/62.3cd 00:04:00.944 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.944 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:00.944 + tmp_file_2=/tmp/spdk_tgt_config.json.0T2 00:04:00.944 + ret=0 00:04:00.944 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:01.512 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:01.512 + diff -u /tmp/62.3cd /tmp/spdk_tgt_config.json.0T2 00:04:01.512 + echo 'INFO: JSON config files are the same' 00:04:01.512 INFO: JSON config files are the same 00:04:01.512 + rm /tmp/62.3cd /tmp/spdk_tgt_config.json.0T2 00:04:01.512 + exit 0 00:04:01.512 16:05:32 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:01.512 16:05:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:01.512 INFO: changing configuration and checking if this can be detected... 00:04:01.512 16:05:32 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:01.512 16:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:01.512 16:05:32 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.512 16:05:32 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:01.512 16:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:01.512 + '[' 2 -ne 2 ']' 00:04:01.512 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:01.512 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:01.512 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:01.512 +++ basename /dev/fd/62 00:04:01.512 ++ mktemp /tmp/62.XXX 00:04:01.512 + tmp_file_1=/tmp/62.uWy 00:04:01.512 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.771 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:01.771 + tmp_file_2=/tmp/spdk_tgt_config.json.U2M 00:04:01.771 + ret=0 00:04:01.771 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.031 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.031 + diff -u /tmp/62.uWy /tmp/spdk_tgt_config.json.U2M 00:04:02.031 + ret=1 00:04:02.031 + echo '=== Start of file: /tmp/62.uWy ===' 00:04:02.031 + cat /tmp/62.uWy 00:04:02.031 + echo '=== End of file: /tmp/62.uWy ===' 00:04:02.031 + echo '' 00:04:02.031 + echo '=== Start of file: /tmp/spdk_tgt_config.json.U2M ===' 00:04:02.031 + cat /tmp/spdk_tgt_config.json.U2M 00:04:02.031 + echo '=== End of file: /tmp/spdk_tgt_config.json.U2M ===' 00:04:02.031 + echo '' 00:04:02.031 + rm /tmp/62.uWy /tmp/spdk_tgt_config.json.U2M 00:04:02.031 + exit 1 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:02.031 INFO: configuration change detected. 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@324 -- # [[ -n 1727972 ]] 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.031 16:05:33 json_config -- json_config/json_config.sh@330 -- # killprocess 1727972 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@954 -- # '[' -z 1727972 ']' 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@958 -- # kill -0 1727972 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@959 -- # uname 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.031 16:05:33 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1727972 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1727972' 00:04:02.031 killing process with pid 1727972 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@973 -- # kill 1727972 00:04:02.031 16:05:33 json_config -- common/autotest_common.sh@978 -- # wait 1727972 00:04:04.568 16:05:35 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.568 16:05:35 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:04.568 16:05:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.568 16:05:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.568 16:05:35 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:04.568 16:05:35 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:04.568 INFO: Success 00:04:04.568 00:04:04.568 real 0m16.994s 00:04:04.568 user 0m17.577s 00:04:04.568 sys 0m2.596s 00:04:04.568 16:05:35 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.568 16:05:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.568 ************************************ 00:04:04.568 END TEST json_config 00:04:04.568 ************************************ 00:04:04.568 16:05:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:04.568 16:05:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.568 16:05:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.568 16:05:35 -- common/autotest_common.sh@10 -- # set +x 00:04:04.568 ************************************ 00:04:04.568 START TEST json_config_extra_key 00:04:04.568 ************************************ 00:04:04.568 16:05:35 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:04.568 16:05:35 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.568 16:05:35 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.568 16:05:35 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.568 16:05:35 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.568 16:05:35 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.568 16:05:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:04.568 16:05:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.568 16:05:35 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.568 --rc genhtml_branch_coverage=1 00:04:04.568 --rc genhtml_function_coverage=1 00:04:04.568 --rc genhtml_legend=1 00:04:04.568 --rc geninfo_all_blocks=1 00:04:04.568 --rc geninfo_unexecuted_blocks=1 00:04:04.568 00:04:04.568 ' 00:04:04.568 16:05:35 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.568 --rc genhtml_branch_coverage=1 00:04:04.568 --rc genhtml_function_coverage=1 00:04:04.568 --rc genhtml_legend=1 00:04:04.568 --rc geninfo_all_blocks=1 00:04:04.568 --rc geninfo_unexecuted_blocks=1 00:04:04.568 00:04:04.568 ' 00:04:04.568 16:05:35 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.568 --rc genhtml_branch_coverage=1 00:04:04.568 --rc genhtml_function_coverage=1 00:04:04.568 --rc genhtml_legend=1 00:04:04.568 --rc geninfo_all_blocks=1 00:04:04.568 --rc geninfo_unexecuted_blocks=1 00:04:04.568 00:04:04.568 ' 00:04:04.568 16:05:35 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.568 --rc genhtml_branch_coverage=1 00:04:04.568 --rc genhtml_function_coverage=1 00:04:04.568 --rc genhtml_legend=1 00:04:04.568 --rc geninfo_all_blocks=1 00:04:04.568 --rc geninfo_unexecuted_blocks=1 00:04:04.568 00:04:04.568 ' 00:04:04.568 16:05:35 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:04.568 16:05:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:04.568 16:05:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.568 16:05:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.568 16:05:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.568 16:05:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.568 16:05:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.568 16:05:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.568 16:05:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.568 16:05:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.568 16:05:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.568 16:05:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:04.569 16:05:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:04.569 16:05:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.569 16:05:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.569 16:05:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.569 16:05:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.569 16:05:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.569 16:05:35 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.569 16:05:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:04.569 16:05:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:04.569 16:05:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:04.569 INFO: launching applications... 
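Before launching anything, json_config_extra_key's common.sh sets up its per-app bookkeeping in bash associative arrays, as echoed in the trace above. Restated without the trace prefixes:

    declare -A app_pid=(['target']='')                     # filled in once spdk_tgt is started
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR     # abort the test on any error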
00:04:04.569 16:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:04.569 16:05:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:04.569 16:05:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:04.569 16:05:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.569 16:05:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.569 16:05:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.569 16:05:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.569 16:05:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.569 16:05:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1729475 00:04:04.569 16:05:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.569 Waiting for target to run... 00:04:04.569 16:05:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1729475 /var/tmp/spdk_tgt.sock 00:04:04.569 16:05:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:04.569 16:05:35 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1729475 ']' 00:04:04.569 16:05:35 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.569 16:05:35 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.569 16:05:35 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.569 16:05:35 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.569 16:05:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:04.569 [2024-11-20 16:05:35.661045] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:04:04.569 [2024-11-20 16:05:35.661089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729475 ] 00:04:04.828 [2024-11-20 16:05:35.941943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.828 [2024-11-20 16:05:35.973005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.396 16:05:36 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.396 16:05:36 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:05.396 16:05:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:05.396 00:04:05.396 16:05:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:05.396 INFO: shutting down applications... 
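How those arrays feed the launch can be read off the common.sh line numbers echoed above. The following is a hedged reconstruction of that helper, not the verbatim script, with $rootdir standing for the spdk checkout; it expands to exactly the spdk_tgt command line shown in the trace:

    app=target
    $rootdir/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!                                   # 1729475 in this run (common.sh@22)
    echo 'Waiting for target to run...'
    waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"

From there the shutdown path that follows is the same signal-and-poll loop already used by the json_config test.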
00:04:05.396 16:05:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:05.396 16:05:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:05.396 16:05:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:05.396 16:05:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1729475 ]] 00:04:05.396 16:05:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1729475 00:04:05.396 16:05:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:05.396 16:05:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.396 16:05:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1729475 00:04:05.396 16:05:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:05.965 16:05:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:05.965 16:05:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.965 16:05:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1729475 00:04:05.965 16:05:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:05.965 16:05:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:05.965 16:05:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:05.965 16:05:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:05.965 SPDK target shutdown done 00:04:05.965 16:05:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:05.965 Success 00:04:05.965 00:04:05.965 real 0m1.565s 00:04:05.965 user 0m1.347s 00:04:05.965 sys 0m0.391s 00:04:05.965 16:05:36 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.965 16:05:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:05.965 ************************************ 00:04:05.965 END TEST json_config_extra_key 00:04:05.965 ************************************ 00:04:05.965 16:05:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:05.965 16:05:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.965 16:05:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.965 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:04:05.965 ************************************ 00:04:05.965 START TEST alias_rpc 00:04:05.965 ************************************ 00:04:05.965 16:05:37 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:05.965 * Looking for test storage... 
00:04:05.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:05.965 16:05:37 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:05.965 16:05:37 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:05.965 16:05:37 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.224 16:05:37 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.224 16:05:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.224 16:05:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.224 16:05:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.224 16:05:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.225 16:05:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:06.225 16:05:37 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.225 16:05:37 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.225 --rc genhtml_branch_coverage=1 00:04:06.225 --rc genhtml_function_coverage=1 00:04:06.225 --rc genhtml_legend=1 00:04:06.225 --rc geninfo_all_blocks=1 00:04:06.225 --rc geninfo_unexecuted_blocks=1 00:04:06.225 00:04:06.225 ' 00:04:06.225 16:05:37 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.225 --rc genhtml_branch_coverage=1 00:04:06.225 --rc genhtml_function_coverage=1 00:04:06.225 --rc genhtml_legend=1 00:04:06.225 --rc geninfo_all_blocks=1 00:04:06.225 --rc geninfo_unexecuted_blocks=1 00:04:06.225 00:04:06.225 ' 00:04:06.225 16:05:37 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.225 --rc genhtml_branch_coverage=1 00:04:06.225 --rc genhtml_function_coverage=1 00:04:06.225 --rc genhtml_legend=1 00:04:06.225 --rc geninfo_all_blocks=1 00:04:06.225 --rc geninfo_unexecuted_blocks=1 00:04:06.225 00:04:06.225 ' 00:04:06.225 16:05:37 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.225 --rc genhtml_branch_coverage=1 00:04:06.225 --rc genhtml_function_coverage=1 00:04:06.225 --rc genhtml_legend=1 00:04:06.225 --rc geninfo_all_blocks=1 00:04:06.225 --rc geninfo_unexecuted_blocks=1 00:04:06.225 00:04:06.225 ' 00:04:06.225 16:05:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:06.225 16:05:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1729769 00:04:06.225 16:05:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.225 16:05:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1729769 00:04:06.225 16:05:37 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1729769 ']' 00:04:06.225 16:05:37 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.225 16:05:37 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.225 16:05:37 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.225 16:05:37 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.225 16:05:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.225 [2024-11-20 16:05:37.282923] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:04:06.225 [2024-11-20 16:05:37.282974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729769 ] 00:04:06.225 [2024-11-20 16:05:37.343833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.225 [2024-11-20 16:05:37.383791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.484 16:05:37 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.484 16:05:37 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:06.484 16:05:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:06.743 16:05:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1729769 00:04:06.743 16:05:37 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1729769 ']' 00:04:06.743 16:05:37 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1729769 00:04:06.743 16:05:37 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:06.743 16:05:37 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.743 16:05:37 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729769 00:04:06.743 16:05:37 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.743 16:05:37 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.743 16:05:37 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729769' 00:04:06.743 killing process with pid 1729769 00:04:06.743 16:05:37 alias_rpc -- common/autotest_common.sh@973 -- # kill 1729769 00:04:06.743 16:05:37 alias_rpc -- common/autotest_common.sh@978 -- # wait 1729769 00:04:07.002 00:04:07.002 real 0m1.126s 00:04:07.002 user 0m1.147s 00:04:07.002 sys 0m0.416s 00:04:07.002 16:05:38 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.002 16:05:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.002 ************************************ 00:04:07.002 END TEST alias_rpc 00:04:07.002 ************************************ 00:04:07.002 16:05:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:07.002 16:05:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:07.002 16:05:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.002 16:05:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.002 16:05:38 -- common/autotest_common.sh@10 -- # set +x 00:04:07.262 ************************************ 00:04:07.262 START TEST spdkcli_tcp 00:04:07.262 ************************************ 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:07.262 * Looking for test storage... 
00:04:07.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.262 16:05:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:07.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.262 --rc genhtml_branch_coverage=1 00:04:07.262 --rc genhtml_function_coverage=1 00:04:07.262 --rc genhtml_legend=1 00:04:07.262 --rc geninfo_all_blocks=1 00:04:07.262 --rc geninfo_unexecuted_blocks=1 00:04:07.262 00:04:07.262 ' 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:07.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.262 --rc genhtml_branch_coverage=1 00:04:07.262 --rc genhtml_function_coverage=1 00:04:07.262 --rc genhtml_legend=1 00:04:07.262 --rc geninfo_all_blocks=1 00:04:07.262 --rc 
geninfo_unexecuted_blocks=1 00:04:07.262 00:04:07.262 ' 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:07.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.262 --rc genhtml_branch_coverage=1 00:04:07.262 --rc genhtml_function_coverage=1 00:04:07.262 --rc genhtml_legend=1 00:04:07.262 --rc geninfo_all_blocks=1 00:04:07.262 --rc geninfo_unexecuted_blocks=1 00:04:07.262 00:04:07.262 ' 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:07.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.262 --rc genhtml_branch_coverage=1 00:04:07.262 --rc genhtml_function_coverage=1 00:04:07.262 --rc genhtml_legend=1 00:04:07.262 --rc geninfo_all_blocks=1 00:04:07.262 --rc geninfo_unexecuted_blocks=1 00:04:07.262 00:04:07.262 ' 00:04:07.262 16:05:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:07.262 16:05:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:07.262 16:05:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:07.262 16:05:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:07.262 16:05:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:07.262 16:05:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:07.262 16:05:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:07.262 16:05:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1730061 00:04:07.262 16:05:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1730061 00:04:07.262 16:05:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1730061 ']' 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.262 16:05:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:07.263 [2024-11-20 16:05:38.492136] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:04:07.263 [2024-11-20 16:05:38.492185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730061 ] 00:04:07.522 [2024-11-20 16:05:38.563399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:07.522 [2024-11-20 16:05:38.604029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.522 [2024-11-20 16:05:38.604030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.781 16:05:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.781 16:05:38 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:07.781 16:05:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1730067 00:04:07.781 16:05:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:07.781 16:05:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:07.781 [ 00:04:07.781 "bdev_malloc_delete", 00:04:07.781 "bdev_malloc_create", 00:04:07.781 "bdev_null_resize", 00:04:07.781 "bdev_null_delete", 00:04:07.781 "bdev_null_create", 00:04:07.781 "bdev_nvme_cuse_unregister", 00:04:07.781 "bdev_nvme_cuse_register", 00:04:07.781 "bdev_opal_new_user", 00:04:07.781 "bdev_opal_set_lock_state", 00:04:07.781 "bdev_opal_delete", 00:04:07.781 "bdev_opal_get_info", 00:04:07.781 "bdev_opal_create", 00:04:07.781 "bdev_nvme_opal_revert", 00:04:07.781 "bdev_nvme_opal_init", 00:04:07.781 "bdev_nvme_send_cmd", 00:04:07.781 "bdev_nvme_set_keys", 00:04:07.781 "bdev_nvme_get_path_iostat", 00:04:07.781 "bdev_nvme_get_mdns_discovery_info", 00:04:07.781 "bdev_nvme_stop_mdns_discovery", 00:04:07.781 "bdev_nvme_start_mdns_discovery", 00:04:07.781 "bdev_nvme_set_multipath_policy", 00:04:07.781 "bdev_nvme_set_preferred_path", 00:04:07.781 "bdev_nvme_get_io_paths", 00:04:07.781 "bdev_nvme_remove_error_injection", 00:04:07.781 "bdev_nvme_add_error_injection", 00:04:07.781 "bdev_nvme_get_discovery_info", 00:04:07.781 "bdev_nvme_stop_discovery", 00:04:07.781 "bdev_nvme_start_discovery", 00:04:07.781 "bdev_nvme_get_controller_health_info", 00:04:07.781 "bdev_nvme_disable_controller", 00:04:07.781 "bdev_nvme_enable_controller", 00:04:07.781 "bdev_nvme_reset_controller", 00:04:07.781 "bdev_nvme_get_transport_statistics", 00:04:07.781 "bdev_nvme_apply_firmware", 00:04:07.781 "bdev_nvme_detach_controller", 00:04:07.781 "bdev_nvme_get_controllers", 00:04:07.781 "bdev_nvme_attach_controller", 00:04:07.781 "bdev_nvme_set_hotplug", 00:04:07.781 "bdev_nvme_set_options", 00:04:07.781 "bdev_passthru_delete", 00:04:07.781 "bdev_passthru_create", 00:04:07.781 "bdev_lvol_set_parent_bdev", 00:04:07.781 "bdev_lvol_set_parent", 00:04:07.781 "bdev_lvol_check_shallow_copy", 00:04:07.781 "bdev_lvol_start_shallow_copy", 00:04:07.781 "bdev_lvol_grow_lvstore", 00:04:07.781 "bdev_lvol_get_lvols", 00:04:07.781 "bdev_lvol_get_lvstores", 00:04:07.781 "bdev_lvol_delete", 00:04:07.781 "bdev_lvol_set_read_only", 00:04:07.781 "bdev_lvol_resize", 00:04:07.782 "bdev_lvol_decouple_parent", 00:04:07.782 "bdev_lvol_inflate", 00:04:07.782 "bdev_lvol_rename", 00:04:07.782 "bdev_lvol_clone_bdev", 00:04:07.782 "bdev_lvol_clone", 00:04:07.782 "bdev_lvol_snapshot", 00:04:07.782 "bdev_lvol_create", 00:04:07.782 "bdev_lvol_delete_lvstore", 00:04:07.782 "bdev_lvol_rename_lvstore", 
00:04:07.782 "bdev_lvol_create_lvstore", 00:04:07.782 "bdev_raid_set_options", 00:04:07.782 "bdev_raid_remove_base_bdev", 00:04:07.782 "bdev_raid_add_base_bdev", 00:04:07.782 "bdev_raid_delete", 00:04:07.782 "bdev_raid_create", 00:04:07.782 "bdev_raid_get_bdevs", 00:04:07.782 "bdev_error_inject_error", 00:04:07.782 "bdev_error_delete", 00:04:07.782 "bdev_error_create", 00:04:07.782 "bdev_split_delete", 00:04:07.782 "bdev_split_create", 00:04:07.782 "bdev_delay_delete", 00:04:07.782 "bdev_delay_create", 00:04:07.782 "bdev_delay_update_latency", 00:04:07.782 "bdev_zone_block_delete", 00:04:07.782 "bdev_zone_block_create", 00:04:07.782 "blobfs_create", 00:04:07.782 "blobfs_detect", 00:04:07.782 "blobfs_set_cache_size", 00:04:07.782 "bdev_aio_delete", 00:04:07.782 "bdev_aio_rescan", 00:04:07.782 "bdev_aio_create", 00:04:07.782 "bdev_ftl_set_property", 00:04:07.782 "bdev_ftl_get_properties", 00:04:07.782 "bdev_ftl_get_stats", 00:04:07.782 "bdev_ftl_unmap", 00:04:07.782 "bdev_ftl_unload", 00:04:07.782 "bdev_ftl_delete", 00:04:07.782 "bdev_ftl_load", 00:04:07.782 "bdev_ftl_create", 00:04:07.782 "bdev_virtio_attach_controller", 00:04:07.782 "bdev_virtio_scsi_get_devices", 00:04:07.782 "bdev_virtio_detach_controller", 00:04:07.782 "bdev_virtio_blk_set_hotplug", 00:04:07.782 "bdev_iscsi_delete", 00:04:07.782 "bdev_iscsi_create", 00:04:07.782 "bdev_iscsi_set_options", 00:04:07.782 "accel_error_inject_error", 00:04:07.782 "ioat_scan_accel_module", 00:04:07.782 "dsa_scan_accel_module", 00:04:07.782 "iaa_scan_accel_module", 00:04:07.782 "vfu_virtio_create_fs_endpoint", 00:04:07.782 "vfu_virtio_create_scsi_endpoint", 00:04:07.782 "vfu_virtio_scsi_remove_target", 00:04:07.782 "vfu_virtio_scsi_add_target", 00:04:07.782 "vfu_virtio_create_blk_endpoint", 00:04:07.782 "vfu_virtio_delete_endpoint", 00:04:07.782 "keyring_file_remove_key", 00:04:07.782 "keyring_file_add_key", 00:04:07.782 "keyring_linux_set_options", 00:04:07.782 "fsdev_aio_delete", 00:04:07.782 "fsdev_aio_create", 00:04:07.782 "iscsi_get_histogram", 00:04:07.782 "iscsi_enable_histogram", 00:04:07.782 "iscsi_set_options", 00:04:07.782 "iscsi_get_auth_groups", 00:04:07.782 "iscsi_auth_group_remove_secret", 00:04:07.782 "iscsi_auth_group_add_secret", 00:04:07.782 "iscsi_delete_auth_group", 00:04:07.782 "iscsi_create_auth_group", 00:04:07.782 "iscsi_set_discovery_auth", 00:04:07.782 "iscsi_get_options", 00:04:07.782 "iscsi_target_node_request_logout", 00:04:07.782 "iscsi_target_node_set_redirect", 00:04:07.782 "iscsi_target_node_set_auth", 00:04:07.782 "iscsi_target_node_add_lun", 00:04:07.782 "iscsi_get_stats", 00:04:07.782 "iscsi_get_connections", 00:04:07.782 "iscsi_portal_group_set_auth", 00:04:07.782 "iscsi_start_portal_group", 00:04:07.782 "iscsi_delete_portal_group", 00:04:07.782 "iscsi_create_portal_group", 00:04:07.782 "iscsi_get_portal_groups", 00:04:07.782 "iscsi_delete_target_node", 00:04:07.782 "iscsi_target_node_remove_pg_ig_maps", 00:04:07.782 "iscsi_target_node_add_pg_ig_maps", 00:04:07.782 "iscsi_create_target_node", 00:04:07.782 "iscsi_get_target_nodes", 00:04:07.782 "iscsi_delete_initiator_group", 00:04:07.782 "iscsi_initiator_group_remove_initiators", 00:04:07.782 "iscsi_initiator_group_add_initiators", 00:04:07.782 "iscsi_create_initiator_group", 00:04:07.782 "iscsi_get_initiator_groups", 00:04:07.782 "nvmf_set_crdt", 00:04:07.782 "nvmf_set_config", 00:04:07.782 "nvmf_set_max_subsystems", 00:04:07.782 "nvmf_stop_mdns_prr", 00:04:07.782 "nvmf_publish_mdns_prr", 00:04:07.782 "nvmf_subsystem_get_listeners", 00:04:07.782 
"nvmf_subsystem_get_qpairs", 00:04:07.782 "nvmf_subsystem_get_controllers", 00:04:07.782 "nvmf_get_stats", 00:04:07.782 "nvmf_get_transports", 00:04:07.782 "nvmf_create_transport", 00:04:07.782 "nvmf_get_targets", 00:04:07.782 "nvmf_delete_target", 00:04:07.782 "nvmf_create_target", 00:04:07.782 "nvmf_subsystem_allow_any_host", 00:04:07.782 "nvmf_subsystem_set_keys", 00:04:07.782 "nvmf_subsystem_remove_host", 00:04:07.782 "nvmf_subsystem_add_host", 00:04:07.782 "nvmf_ns_remove_host", 00:04:07.782 "nvmf_ns_add_host", 00:04:07.782 "nvmf_subsystem_remove_ns", 00:04:07.782 "nvmf_subsystem_set_ns_ana_group", 00:04:07.782 "nvmf_subsystem_add_ns", 00:04:07.782 "nvmf_subsystem_listener_set_ana_state", 00:04:07.782 "nvmf_discovery_get_referrals", 00:04:07.782 "nvmf_discovery_remove_referral", 00:04:07.782 "nvmf_discovery_add_referral", 00:04:07.782 "nvmf_subsystem_remove_listener", 00:04:07.782 "nvmf_subsystem_add_listener", 00:04:07.782 "nvmf_delete_subsystem", 00:04:07.782 "nvmf_create_subsystem", 00:04:07.782 "nvmf_get_subsystems", 00:04:07.782 "env_dpdk_get_mem_stats", 00:04:07.782 "nbd_get_disks", 00:04:07.782 "nbd_stop_disk", 00:04:07.782 "nbd_start_disk", 00:04:07.782 "ublk_recover_disk", 00:04:07.782 "ublk_get_disks", 00:04:07.782 "ublk_stop_disk", 00:04:07.782 "ublk_start_disk", 00:04:07.782 "ublk_destroy_target", 00:04:07.782 "ublk_create_target", 00:04:07.782 "virtio_blk_create_transport", 00:04:07.782 "virtio_blk_get_transports", 00:04:07.782 "vhost_controller_set_coalescing", 00:04:07.782 "vhost_get_controllers", 00:04:07.782 "vhost_delete_controller", 00:04:07.782 "vhost_create_blk_controller", 00:04:07.782 "vhost_scsi_controller_remove_target", 00:04:07.782 "vhost_scsi_controller_add_target", 00:04:07.782 "vhost_start_scsi_controller", 00:04:07.782 "vhost_create_scsi_controller", 00:04:07.782 "thread_set_cpumask", 00:04:07.782 "scheduler_set_options", 00:04:07.782 "framework_get_governor", 00:04:07.782 "framework_get_scheduler", 00:04:07.782 "framework_set_scheduler", 00:04:07.782 "framework_get_reactors", 00:04:07.782 "thread_get_io_channels", 00:04:07.782 "thread_get_pollers", 00:04:07.782 "thread_get_stats", 00:04:07.782 "framework_monitor_context_switch", 00:04:07.782 "spdk_kill_instance", 00:04:07.782 "log_enable_timestamps", 00:04:07.782 "log_get_flags", 00:04:07.782 "log_clear_flag", 00:04:07.782 "log_set_flag", 00:04:07.782 "log_get_level", 00:04:07.782 "log_set_level", 00:04:07.782 "log_get_print_level", 00:04:07.782 "log_set_print_level", 00:04:07.782 "framework_enable_cpumask_locks", 00:04:07.782 "framework_disable_cpumask_locks", 00:04:07.782 "framework_wait_init", 00:04:07.782 "framework_start_init", 00:04:07.782 "scsi_get_devices", 00:04:07.782 "bdev_get_histogram", 00:04:07.782 "bdev_enable_histogram", 00:04:07.782 "bdev_set_qos_limit", 00:04:07.782 "bdev_set_qd_sampling_period", 00:04:07.782 "bdev_get_bdevs", 00:04:07.782 "bdev_reset_iostat", 00:04:07.782 "bdev_get_iostat", 00:04:07.782 "bdev_examine", 00:04:07.782 "bdev_wait_for_examine", 00:04:07.782 "bdev_set_options", 00:04:07.782 "accel_get_stats", 00:04:07.782 "accel_set_options", 00:04:07.782 "accel_set_driver", 00:04:07.782 "accel_crypto_key_destroy", 00:04:07.782 "accel_crypto_keys_get", 00:04:07.782 "accel_crypto_key_create", 00:04:07.782 "accel_assign_opc", 00:04:07.782 "accel_get_module_info", 00:04:07.783 "accel_get_opc_assignments", 00:04:07.783 "vmd_rescan", 00:04:07.783 "vmd_remove_device", 00:04:07.783 "vmd_enable", 00:04:07.783 "sock_get_default_impl", 00:04:07.783 "sock_set_default_impl", 
00:04:07.783 "sock_impl_set_options", 00:04:07.783 "sock_impl_get_options", 00:04:07.783 "iobuf_get_stats", 00:04:07.783 "iobuf_set_options", 00:04:07.783 "keyring_get_keys", 00:04:07.783 "vfu_tgt_set_base_path", 00:04:07.783 "framework_get_pci_devices", 00:04:07.783 "framework_get_config", 00:04:07.783 "framework_get_subsystems", 00:04:07.783 "fsdev_set_opts", 00:04:07.783 "fsdev_get_opts", 00:04:07.783 "trace_get_info", 00:04:07.783 "trace_get_tpoint_group_mask", 00:04:07.783 "trace_disable_tpoint_group", 00:04:07.783 "trace_enable_tpoint_group", 00:04:07.783 "trace_clear_tpoint_mask", 00:04:07.783 "trace_set_tpoint_mask", 00:04:07.783 "notify_get_notifications", 00:04:07.783 "notify_get_types", 00:04:07.783 "spdk_get_version", 00:04:07.783 "rpc_get_methods" 00:04:07.783 ] 00:04:08.042 16:05:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:08.042 16:05:39 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.042 16:05:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.042 16:05:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:08.042 16:05:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1730061 00:04:08.042 16:05:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1730061 ']' 00:04:08.042 16:05:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1730061 00:04:08.042 16:05:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:08.043 16:05:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.043 16:05:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1730061 00:04:08.043 16:05:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.043 16:05:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.043 16:05:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1730061' 00:04:08.043 killing process with pid 1730061 00:04:08.043 16:05:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1730061 00:04:08.043 16:05:39 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1730061 00:04:08.300 00:04:08.300 real 0m1.153s 00:04:08.300 user 0m1.943s 00:04:08.300 sys 0m0.438s 00:04:08.300 16:05:39 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.300 16:05:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.300 ************************************ 00:04:08.300 END TEST spdkcli_tcp 00:04:08.300 ************************************ 00:04:08.300 16:05:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:08.300 16:05:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.300 16:05:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.300 16:05:39 -- common/autotest_common.sh@10 -- # set +x 00:04:08.300 ************************************ 00:04:08.300 START TEST dpdk_mem_utility 00:04:08.300 ************************************ 00:04:08.300 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:08.559 * Looking for test storage... 
00:04:08.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.560 16:05:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:08.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.560 --rc genhtml_branch_coverage=1 00:04:08.560 --rc genhtml_function_coverage=1 00:04:08.560 --rc genhtml_legend=1 00:04:08.560 --rc geninfo_all_blocks=1 00:04:08.560 --rc geninfo_unexecuted_blocks=1 00:04:08.560 00:04:08.560 ' 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:08.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.560 --rc 
genhtml_branch_coverage=1 00:04:08.560 --rc genhtml_function_coverage=1 00:04:08.560 --rc genhtml_legend=1 00:04:08.560 --rc geninfo_all_blocks=1 00:04:08.560 --rc geninfo_unexecuted_blocks=1 00:04:08.560 00:04:08.560 ' 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:08.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.560 --rc genhtml_branch_coverage=1 00:04:08.560 --rc genhtml_function_coverage=1 00:04:08.560 --rc genhtml_legend=1 00:04:08.560 --rc geninfo_all_blocks=1 00:04:08.560 --rc geninfo_unexecuted_blocks=1 00:04:08.560 00:04:08.560 ' 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:08.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.560 --rc genhtml_branch_coverage=1 00:04:08.560 --rc genhtml_function_coverage=1 00:04:08.560 --rc genhtml_legend=1 00:04:08.560 --rc geninfo_all_blocks=1 00:04:08.560 --rc geninfo_unexecuted_blocks=1 00:04:08.560 00:04:08.560 ' 00:04:08.560 16:05:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:08.560 16:05:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1730344 00:04:08.560 16:05:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1730344 00:04:08.560 16:05:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1730344 ']' 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.560 16:05:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:08.560 [2024-11-20 16:05:39.700528] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
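A condensed sketch of the sequence this dpdk_mem_utility test drives, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock and with repository paths abbreviated to ./scripts (the dump filename comes from the RPC reply shown in the trace below):

./scripts/rpc.py env_dpdk_get_mem_stats      # returns {"filename": "/tmp/spdk_mem_dump.txt"}
./scripts/dpdk_mem_info.py                   # parses the dump and summarizes heaps, mempools and memzones
./scripts/dpdk_mem_info.py -m 0              # second form the test runs against the same dump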
00:04:08.560 [2024-11-20 16:05:39.700581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730344 ] 00:04:08.560 [2024-11-20 16:05:39.774123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.820 [2024-11-20 16:05:39.815068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.388 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.389 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:09.389 16:05:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:09.389 16:05:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:09.389 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.389 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:09.389 { 00:04:09.389 "filename": "/tmp/spdk_mem_dump.txt" 00:04:09.389 } 00:04:09.389 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.389 16:05:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:09.389 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:09.389 1 heaps totaling size 818.000000 MiB 00:04:09.389 size: 818.000000 MiB heap id: 0 00:04:09.389 end heaps---------- 00:04:09.389 9 mempools totaling size 603.782043 MiB 00:04:09.389 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:09.389 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:09.389 size: 100.555481 MiB name: bdev_io_1730344 00:04:09.389 size: 50.003479 MiB name: msgpool_1730344 00:04:09.389 size: 36.509338 MiB name: fsdev_io_1730344 00:04:09.389 size: 21.763794 MiB name: PDU_Pool 00:04:09.389 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:09.389 size: 4.133484 MiB name: evtpool_1730344 00:04:09.389 size: 0.026123 MiB name: Session_Pool 00:04:09.389 end mempools------- 00:04:09.389 6 memzones totaling size 4.142822 MiB 00:04:09.389 size: 1.000366 MiB name: RG_ring_0_1730344 00:04:09.389 size: 1.000366 MiB name: RG_ring_1_1730344 00:04:09.389 size: 1.000366 MiB name: RG_ring_4_1730344 00:04:09.389 size: 1.000366 MiB name: RG_ring_5_1730344 00:04:09.389 size: 0.125366 MiB name: RG_ring_2_1730344 00:04:09.389 size: 0.015991 MiB name: RG_ring_3_1730344 00:04:09.389 end memzones------- 00:04:09.389 16:05:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:09.649 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:09.649 list of free elements. 
size: 10.852478 MiB 00:04:09.649 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:09.649 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:09.649 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:09.649 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:09.649 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:09.649 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:09.649 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:09.649 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:09.649 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:09.649 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:09.649 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:09.649 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:09.649 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:09.649 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:09.649 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:09.649 list of standard malloc elements. size: 199.218628 MiB 00:04:09.649 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:09.649 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:09.649 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:09.649 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:09.649 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:09.649 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:09.649 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:09.649 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:09.649 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:09.649 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:09.649 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:09.649 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:09.649 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:09.649 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:09.649 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:09.649 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:09.649 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:09.649 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:09.649 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:09.649 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:09.649 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:09.649 list of memzone associated elements. size: 607.928894 MiB 00:04:09.649 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:09.650 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:09.650 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:09.650 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:09.650 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:09.650 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1730344_0 00:04:09.650 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:09.650 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1730344_0 00:04:09.650 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:09.650 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1730344_0 00:04:09.650 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:09.650 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:09.650 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:09.650 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:09.650 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:09.650 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1730344_0 00:04:09.650 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:09.650 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1730344 00:04:09.650 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:09.650 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1730344 00:04:09.650 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:09.650 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:09.650 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:09.650 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:09.650 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:09.650 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:09.650 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:09.650 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:09.650 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:09.650 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1730344 00:04:09.650 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:09.650 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1730344 00:04:09.650 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:09.650 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1730344 00:04:09.650 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:09.650 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1730344 00:04:09.650 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:09.650 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1730344 00:04:09.650 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:09.650 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1730344 00:04:09.650 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:09.650 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:09.650 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:09.650 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:09.650 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:09.650 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:09.650 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:09.650 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1730344 00:04:09.650 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:09.650 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1730344 00:04:09.650 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:09.650 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:09.650 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:09.650 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:09.650 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:09.650 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1730344 00:04:09.650 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:09.650 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:09.650 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:09.650 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1730344 00:04:09.650 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:09.650 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1730344 00:04:09.650 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:09.650 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1730344 00:04:09.650 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:09.650 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:09.650 16:05:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:09.650 16:05:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1730344 00:04:09.650 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1730344 ']' 00:04:09.650 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1730344 00:04:09.650 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:09.650 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:09.650 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1730344 00:04:09.650 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:09.650 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:09.650 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1730344' 00:04:09.650 killing process with pid 1730344 00:04:09.650 16:05:40 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1730344 00:04:09.650 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1730344 00:04:09.909 00:04:09.909 real 0m1.513s 00:04:09.909 user 0m1.606s 00:04:09.909 sys 0m0.427s 00:04:09.909 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.909 16:05:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:09.909 ************************************ 00:04:09.909 END TEST dpdk_mem_utility 00:04:09.909 ************************************ 00:04:09.909 16:05:41 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:09.909 16:05:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.909 16:05:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.909 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:04:09.909 ************************************ 00:04:09.909 START TEST event 00:04:09.909 ************************************ 00:04:09.909 16:05:41 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:10.168 * Looking for test storage... 00:04:10.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:10.168 16:05:41 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:10.168 16:05:41 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:10.168 16:05:41 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:10.168 16:05:41 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:10.168 16:05:41 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.169 16:05:41 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.169 16:05:41 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.169 16:05:41 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.169 16:05:41 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.169 16:05:41 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.169 16:05:41 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.169 16:05:41 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.169 16:05:41 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.169 16:05:41 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.169 16:05:41 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.169 16:05:41 event -- scripts/common.sh@344 -- # case "$op" in 00:04:10.169 16:05:41 event -- scripts/common.sh@345 -- # : 1 00:04:10.169 16:05:41 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.169 16:05:41 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.169 16:05:41 event -- scripts/common.sh@365 -- # decimal 1 00:04:10.169 16:05:41 event -- scripts/common.sh@353 -- # local d=1 00:04:10.169 16:05:41 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.169 16:05:41 event -- scripts/common.sh@355 -- # echo 1 00:04:10.169 16:05:41 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.169 16:05:41 event -- scripts/common.sh@366 -- # decimal 2 00:04:10.169 16:05:41 event -- scripts/common.sh@353 -- # local d=2 00:04:10.169 16:05:41 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.169 16:05:41 event -- scripts/common.sh@355 -- # echo 2 00:04:10.169 16:05:41 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.169 16:05:41 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.169 16:05:41 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.169 16:05:41 event -- scripts/common.sh@368 -- # return 0 00:04:10.169 16:05:41 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.169 16:05:41 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:10.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.169 --rc genhtml_branch_coverage=1 00:04:10.169 --rc genhtml_function_coverage=1 00:04:10.169 --rc genhtml_legend=1 00:04:10.169 --rc geninfo_all_blocks=1 00:04:10.169 --rc geninfo_unexecuted_blocks=1 00:04:10.169 00:04:10.169 ' 00:04:10.169 16:05:41 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:10.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.169 --rc genhtml_branch_coverage=1 00:04:10.169 --rc genhtml_function_coverage=1 00:04:10.169 --rc genhtml_legend=1 00:04:10.169 --rc geninfo_all_blocks=1 00:04:10.169 --rc geninfo_unexecuted_blocks=1 00:04:10.169 00:04:10.169 ' 00:04:10.169 16:05:41 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:10.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.169 --rc genhtml_branch_coverage=1 00:04:10.169 --rc genhtml_function_coverage=1 00:04:10.169 --rc genhtml_legend=1 00:04:10.169 --rc geninfo_all_blocks=1 00:04:10.169 --rc geninfo_unexecuted_blocks=1 00:04:10.169 00:04:10.169 ' 00:04:10.169 16:05:41 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:10.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.169 --rc genhtml_branch_coverage=1 00:04:10.169 --rc genhtml_function_coverage=1 00:04:10.169 --rc genhtml_legend=1 00:04:10.169 --rc geninfo_all_blocks=1 00:04:10.169 --rc geninfo_unexecuted_blocks=1 00:04:10.169 00:04:10.169 ' 00:04:10.169 16:05:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:10.169 16:05:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:10.169 16:05:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:10.169 16:05:41 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:10.169 16:05:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.169 16:05:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.169 ************************************ 00:04:10.169 START TEST event_perf 00:04:10.169 ************************************ 00:04:10.169 16:05:41 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:10.169 Running I/O for 1 seconds...[2024-11-20 16:05:41.290158] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:04:10.169 [2024-11-20 16:05:41.290237] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730663 ] 00:04:10.169 [2024-11-20 16:05:41.365199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:10.427 [2024-11-20 16:05:41.409504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.427 [2024-11-20 16:05:41.409615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:10.427 [2024-11-20 16:05:41.409721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.427 Running I/O for 1 seconds...[2024-11-20 16:05:41.409722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.364 00:04:11.364 lcore 0: 206595 00:04:11.364 lcore 1: 206595 00:04:11.364 lcore 2: 206596 00:04:11.364 lcore 3: 206596 00:04:11.364 done. 00:04:11.364 00:04:11.364 real 0m1.180s 00:04:11.364 user 0m4.102s 00:04:11.364 sys 0m0.074s 00:04:11.364 16:05:42 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.364 16:05:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:11.364 ************************************ 00:04:11.364 END TEST event_perf 00:04:11.364 ************************************ 00:04:11.364 16:05:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:11.364 16:05:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:11.364 16:05:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.364 16:05:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.364 ************************************ 00:04:11.364 START TEST event_reactor 00:04:11.364 ************************************ 00:04:11.364 16:05:42 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:11.364 [2024-11-20 16:05:42.542962] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:04:11.364 [2024-11-20 16:05:42.543033] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730914 ] 00:04:11.622 [2024-11-20 16:05:42.622333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.622 [2024-11-20 16:05:42.664549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.560 test_start 00:04:12.560 oneshot 00:04:12.560 tick 100 00:04:12.560 tick 100 00:04:12.560 tick 250 00:04:12.560 tick 100 00:04:12.560 tick 100 00:04:12.560 tick 100 00:04:12.560 tick 250 00:04:12.560 tick 500 00:04:12.560 tick 100 00:04:12.560 tick 100 00:04:12.560 tick 250 00:04:12.560 tick 100 00:04:12.560 tick 100 00:04:12.560 test_end 00:04:12.560 00:04:12.560 real 0m1.180s 00:04:12.560 user 0m1.101s 00:04:12.560 sys 0m0.075s 00:04:12.560 16:05:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.560 16:05:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:12.560 ************************************ 00:04:12.560 END TEST event_reactor 00:04:12.560 ************************************ 00:04:12.560 16:05:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:12.560 16:05:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:12.560 16:05:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.560 16:05:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.560 ************************************ 00:04:12.560 START TEST event_reactor_perf 00:04:12.560 ************************************ 00:04:12.560 16:05:43 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:12.820 [2024-11-20 16:05:43.795602] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:04:12.820 [2024-11-20 16:05:43.795675] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731135 ] 00:04:12.820 [2024-11-20 16:05:43.873156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.820 [2024-11-20 16:05:43.912387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.756 test_start 00:04:13.756 test_end 00:04:13.756 Performance: 519719 events per second 00:04:13.756 00:04:13.756 real 0m1.180s 00:04:13.756 user 0m1.095s 00:04:13.756 sys 0m0.082s 00:04:13.756 16:05:44 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.756 16:05:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:13.756 ************************************ 00:04:13.756 END TEST event_reactor_perf 00:04:13.756 ************************************ 00:04:14.015 16:05:44 event -- event/event.sh@49 -- # uname -s 00:04:14.015 16:05:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:14.015 16:05:44 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:14.015 16:05:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.015 16:05:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.015 16:05:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.015 ************************************ 00:04:14.015 START TEST event_scheduler 00:04:14.015 ************************************ 00:04:14.015 16:05:45 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:14.015 * Looking for test storage... 
00:04:14.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:14.015 16:05:45 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.015 16:05:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.015 16:05:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.015 16:05:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.015 16:05:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.016 16:05:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:14.016 16:05:45 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.016 16:05:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.016 --rc genhtml_branch_coverage=1 00:04:14.016 --rc genhtml_function_coverage=1 00:04:14.016 --rc genhtml_legend=1 00:04:14.016 --rc geninfo_all_blocks=1 00:04:14.016 --rc geninfo_unexecuted_blocks=1 00:04:14.016 00:04:14.016 ' 00:04:14.016 16:05:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.016 --rc genhtml_branch_coverage=1 00:04:14.016 --rc genhtml_function_coverage=1 00:04:14.016 --rc genhtml_legend=1 00:04:14.016 --rc geninfo_all_blocks=1 00:04:14.016 --rc geninfo_unexecuted_blocks=1 00:04:14.016 00:04:14.016 ' 00:04:14.016 16:05:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.016 --rc genhtml_branch_coverage=1 00:04:14.016 --rc genhtml_function_coverage=1 00:04:14.016 --rc genhtml_legend=1 00:04:14.016 --rc geninfo_all_blocks=1 00:04:14.016 --rc geninfo_unexecuted_blocks=1 00:04:14.016 00:04:14.016 ' 00:04:14.016 16:05:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.016 --rc genhtml_branch_coverage=1 00:04:14.016 --rc genhtml_function_coverage=1 00:04:14.016 --rc genhtml_legend=1 00:04:14.016 --rc geninfo_all_blocks=1 00:04:14.016 --rc geninfo_unexecuted_blocks=1 00:04:14.016 00:04:14.016 ' 00:04:14.016 16:05:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:14.016 16:05:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1731449 00:04:14.016 16:05:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:14.016 16:05:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.016 16:05:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1731449 00:04:14.016 16:05:45 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1731449 ']' 00:04:14.016 16:05:45 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.016 16:05:45 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.016 16:05:45 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.016 16:05:45 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.016 16:05:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:14.274 [2024-11-20 16:05:45.256748] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:04:14.274 [2024-11-20 16:05:45.256798] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731449 ] 00:04:14.275 [2024-11-20 16:05:45.317441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:14.275 [2024-11-20 16:05:45.362273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.275 [2024-11-20 16:05:45.362850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.275 [2024-11-20 16:05:45.362937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:14.275 [2024-11-20 16:05:45.362937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:14.275 16:05:45 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.275 16:05:45 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:14.275 16:05:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:14.275 16:05:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.275 16:05:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:14.275 [2024-11-20 16:05:45.423457] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:14.275 [2024-11-20 16:05:45.423474] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:14.275 [2024-11-20 16:05:45.423483] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:14.275 [2024-11-20 16:05:45.423489] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:14.275 [2024-11-20 16:05:45.423494] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:14.275 16:05:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.275 16:05:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:14.275 16:05:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.275 16:05:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:14.275 [2024-11-20 16:05:45.497913] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:14.275 16:05:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.275 16:05:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:14.275 16:05:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.275 16:05:45 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.275 16:05:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:14.533 ************************************ 00:04:14.533 START TEST scheduler_create_thread 00:04:14.533 ************************************ 00:04:14.533 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.534 2 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.534 3 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.534 4 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.534 5 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.534 6 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.534 7 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.534 8 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.534 9 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.534 10 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.534 16:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.468 16:05:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.468 16:05:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:15.468 16:05:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.468 16:05:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.842 16:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.842 16:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:16.842 16:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:16.842 16:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.842 16:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.778 16:05:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.778 00:04:17.778 real 0m3.382s 00:04:17.778 user 0m0.023s 00:04:17.778 sys 0m0.006s 00:04:17.778 16:05:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.778 16:05:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.778 ************************************ 00:04:17.778 END TEST scheduler_create_thread 00:04:17.778 ************************************ 00:04:17.778 16:05:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:17.778 16:05:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1731449 00:04:17.778 16:05:48 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1731449 ']' 00:04:17.778 16:05:48 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1731449 00:04:17.778 16:05:48 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:17.778 16:05:48 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.778 16:05:48 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1731449 00:04:17.778 16:05:49 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:17.778 16:05:49 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:17.778 16:05:49 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1731449' 00:04:17.778 killing process with pid 1731449 00:04:17.778 16:05:49 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1731449 00:04:17.778 16:05:49 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1731449 00:04:18.435 [2024-11-20 16:05:49.297985] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
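For reference, the scheduler_plugin RPCs that scheduler_create_thread exercised above can be issued by hand; a sketch assuming the plugin module from test/event/scheduler is on PYTHONPATH, and noting that thread ids 11 and 12 are the values returned in this particular run, not fixed numbers:

./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # returns the new thread id
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # set thread 11's active load to 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # delete thread 12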
00:04:18.435 00:04:18.435 real 0m4.472s 00:04:18.435 user 0m7.862s 00:04:18.435 sys 0m0.384s 00:04:18.435 16:05:49 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.435 16:05:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:18.435 ************************************ 00:04:18.435 END TEST event_scheduler 00:04:18.435 ************************************ 00:04:18.435 16:05:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:18.435 16:05:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:18.435 16:05:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.435 16:05:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.435 16:05:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.435 ************************************ 00:04:18.435 START TEST app_repeat 00:04:18.435 ************************************ 00:04:18.435 16:05:49 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1732198 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1732198' 00:04:18.435 Process app_repeat pid: 1732198 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:18.435 spdk_app_start Round 0 00:04:18.435 16:05:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1732198 /var/tmp/spdk-nbd.sock 00:04:18.435 16:05:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1732198 ']' 00:04:18.435 16:05:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.435 16:05:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.435 16:05:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:18.435 16:05:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.435 16:05:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.435 [2024-11-20 16:05:49.619502] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
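Before the first round, the harness launches the app_repeat helper with its own RPC socket (-r /var/tmp/spdk-nbd.sock), a two-core mask (-m 0x3) and a per-round timeout (-t 4), installs a trap so the process is cleaned up if the test aborts, and blocks until the socket answers. The snippet below is a condensed sketch of that launch; the polling loop is an illustrative stand-in for the harness's waitforlisten helper (not a copy of it), and the kill -9 in the trap is a simplification of its killprocess cleanup.

# Sketch: start app_repeat from an SPDK checkout and wait for its RPC socket.
SOCK=/var/tmp/spdk-nbd.sock

./test/event/app_repeat/app_repeat -r "$SOCK" -m 0x3 -t 4 &
repeat_pid=$!
trap 'kill -9 "$repeat_pid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT   # cleanup on abort

for _ in $(seq 1 100); do                                    # stand-in for waitforlisten
    ./scripts/rpc.py -s "$SOCK" spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done

The harness clears this trap again once all rounds have finished, as the trap - SIGINT SIGTERM EXIT near the end of the test shows.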
00:04:18.435 [2024-11-20 16:05:49.619555] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1732198 ] 00:04:18.698 [2024-11-20 16:05:49.695719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.698 [2024-11-20 16:05:49.743221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.698 [2024-11-20 16:05:49.743224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.698 16:05:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.698 16:05:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:18.698 16:05:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.956 Malloc0 00:04:18.956 16:05:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.215 Malloc1 00:04:19.215 16:05:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.215 16:05:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:19.474 /dev/nbd0 00:04:19.474 16:05:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:19.474 16:05:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.474 1+0 records in 00:04:19.474 1+0 records out 00:04:19.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197642 s, 20.7 MB/s 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:19.474 16:05:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:19.474 16:05:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.474 16:05:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.474 16:05:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:19.732 /dev/nbd1 00:04:19.732 16:05:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:19.732 16:05:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.732 1+0 records in 00:04:19.732 1+0 records out 00:04:19.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259899 s, 15.8 MB/s 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:19.732 16:05:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:19.732 16:05:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.732 16:05:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.732 
16:05:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.732 16:05:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.732 16:05:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.991 16:05:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:19.991 { 00:04:19.991 "nbd_device": "/dev/nbd0", 00:04:19.991 "bdev_name": "Malloc0" 00:04:19.991 }, 00:04:19.991 { 00:04:19.991 "nbd_device": "/dev/nbd1", 00:04:19.991 "bdev_name": "Malloc1" 00:04:19.991 } 00:04:19.991 ]' 00:04:19.991 16:05:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.991 16:05:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:19.991 { 00:04:19.991 "nbd_device": "/dev/nbd0", 00:04:19.991 "bdev_name": "Malloc0" 00:04:19.991 }, 00:04:19.991 { 00:04:19.991 "nbd_device": "/dev/nbd1", 00:04:19.991 "bdev_name": "Malloc1" 00:04:19.991 } 00:04:19.991 ]' 00:04:19.991 16:05:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:19.991 /dev/nbd1' 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:19.991 /dev/nbd1' 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:19.991 256+0 records in 00:04:19.991 256+0 records out 00:04:19.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010647 s, 98.5 MB/s 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:19.991 256+0 records in 00:04:19.991 256+0 records out 00:04:19.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013632 s, 76.9 MB/s 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:19.991 256+0 records in 00:04:19.991 256+0 records out 00:04:19.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148612 s, 70.6 MB/s 00:04:19.991 16:05:51 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.991 16:05:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.992 16:05:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:20.251 16:05:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:20.251 16:05:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:20.251 16:05:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:20.251 16:05:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.251 16:05:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.251 16:05:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:20.251 16:05:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.251 16:05:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.251 16:05:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.251 16:05:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:20.510 16:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.769 16:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:20.769 16:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:20.769 16:05:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:20.769 16:05:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:20.769 16:05:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:20.769 16:05:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:20.769 16:05:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:20.769 16:05:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:21.027 [2024-11-20 16:05:52.111418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.027 [2024-11-20 16:05:52.148509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.027 [2024-11-20 16:05:52.148509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.027 [2024-11-20 16:05:52.188903] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:21.027 [2024-11-20 16:05:52.188946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.312 16:05:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:24.312 16:05:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:24.312 spdk_app_start Round 1 00:04:24.312 16:05:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1732198 /var/tmp/spdk-nbd.sock 00:04:24.312 16:05:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1732198 ']' 00:04:24.312 16:05:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.312 16:05:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.312 16:05:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
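Each round traced above follows the same data path: two 64 MB malloc bdevs with a 4 KiB block size are created over the app's RPC socket, exposed as /dev/nbd0 and /dev/nbd1, a 1 MiB random file is written through both NBD devices with dd, and cmp reads the data back to confirm the bdevs stored it intact. A condensed sketch of that round body follows; paths are illustrative, it is meant to be run from an SPDK checkout, and error handling is omitted.

# Sketch of one app_repeat round: malloc bdevs -> NBD export -> write -> verify.
RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=$(mktemp)                                                # stand-in for the nbdrandtest file

m0=$($RPC bdev_malloc_create 64 4096)                        # 64 MB bdev, 4 KiB blocks -> "Malloc0"
m1=$($RPC bdev_malloc_create 64 4096)                        # -> "Malloc1"
$RPC nbd_start_disk "$m0" /dev/nbd0                          # export each bdev as an NBD device
$RPC nbd_start_disk "$m1" /dev/nbd1

dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write it through the NBD device
    cmp -b -n 1M "$tmp" "$nbd"                               # read back and byte-compare
done
rm -f "$tmp"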
00:04:24.312 16:05:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.312 16:05:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.312 16:05:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.312 16:05:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:24.312 16:05:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.312 Malloc0 00:04:24.312 16:05:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.312 Malloc1 00:04:24.572 16:05:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:24.572 /dev/nbd0 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:24.572 1+0 records in 00:04:24.572 1+0 records out 00:04:24.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202504 s, 20.2 MB/s 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:24.572 16:05:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.572 16:05:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:24.831 /dev/nbd1 00:04:24.831 16:05:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:24.831 16:05:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.831 1+0 records in 00:04:24.831 1+0 records out 00:04:24.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194288 s, 21.1 MB/s 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:24.831 16:05:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:24.831 16:05:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.831 16:05:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.831 16:05:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.831 16:05:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.831 16:05:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:25.090 { 00:04:25.090 "nbd_device": "/dev/nbd0", 00:04:25.090 "bdev_name": "Malloc0" 00:04:25.090 }, 00:04:25.090 { 00:04:25.090 "nbd_device": "/dev/nbd1", 00:04:25.090 "bdev_name": "Malloc1" 00:04:25.090 } 00:04:25.090 ]' 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.090 { 00:04:25.090 "nbd_device": "/dev/nbd0", 00:04:25.090 "bdev_name": "Malloc0" 00:04:25.090 }, 00:04:25.090 { 00:04:25.090 "nbd_device": "/dev/nbd1", 00:04:25.090 "bdev_name": "Malloc1" 00:04:25.090 } 00:04:25.090 ]' 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.090 /dev/nbd1' 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.090 /dev/nbd1' 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.090 256+0 records in 00:04:25.090 256+0 records out 00:04:25.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102558 s, 102 MB/s 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.090 16:05:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.349 256+0 records in 00:04:25.349 256+0 records out 00:04:25.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145347 s, 72.1 MB/s 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.349 256+0 records in 00:04:25.349 256+0 records out 00:04:25.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145897 s, 71.9 MB/s 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.349 16:05:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.608 16:05:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:25.867 16:05:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:25.867 16:05:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.126 16:05:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:26.385 [2024-11-20 16:05:57.402657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.385 [2024-11-20 16:05:57.439795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.385 [2024-11-20 16:05:57.439795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.385 [2024-11-20 16:05:57.481165] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.385 [2024-11-20 16:05:57.481209] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:29.669 16:06:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:29.669 16:06:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:29.669 spdk_app_start Round 2 00:04:29.670 16:06:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1732198 /var/tmp/spdk-nbd.sock 00:04:29.670 16:06:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1732198 ']' 00:04:29.670 16:06:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.670 16:06:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.670 16:06:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
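Between the data verify and the teardown the harness cross-checks the NBD bookkeeping: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, jq extracts the device paths, and grep -c counts them, expecting 2 while the disks are attached and 0 once they have been stopped. A standalone sketch of that check, with the socket path assumed as above:

# Sketch: count attached NBD devices via the nbd_get_disks RPC.
RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

nbd_disks_json=$($RPC nbd_get_disks)                 # e.g. [{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}, ...]
nbd_names=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_names" | grep -c /dev/nbd || true)    # grep -c exits non-zero when nothing matches
[ "$count" -eq 2 ] || echo "unexpected NBD device count: $count"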
00:04:29.670 16:06:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.670 16:06:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.670 16:06:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.670 16:06:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:29.670 16:06:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.670 Malloc0 00:04:29.670 16:06:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.670 Malloc1 00:04:29.670 16:06:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.670 16:06:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.929 /dev/nbd0 00:04:29.929 16:06:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.929 16:06:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:29.929 1+0 records in 00:04:29.929 1+0 records out 00:04:29.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194333 s, 21.1 MB/s 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:29.929 16:06:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:29.929 16:06:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.929 16:06:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.929 16:06:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.188 /dev/nbd1 00:04:30.188 16:06:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.188 16:06:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.188 1+0 records in 00:04:30.188 1+0 records out 00:04:30.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00733413 s, 558 kB/s 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:30.188 16:06:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:30.188 16:06:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.188 16:06:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.188 16:06:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.189 16:06:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.189 16:06:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:30.448 { 00:04:30.448 "nbd_device": "/dev/nbd0", 00:04:30.448 "bdev_name": "Malloc0" 00:04:30.448 }, 00:04:30.448 { 00:04:30.448 "nbd_device": "/dev/nbd1", 00:04:30.448 "bdev_name": "Malloc1" 00:04:30.448 } 00:04:30.448 ]' 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.448 { 00:04:30.448 "nbd_device": "/dev/nbd0", 00:04:30.448 "bdev_name": "Malloc0" 00:04:30.448 }, 00:04:30.448 { 00:04:30.448 "nbd_device": "/dev/nbd1", 00:04:30.448 "bdev_name": "Malloc1" 00:04:30.448 } 00:04:30.448 ]' 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.448 /dev/nbd1' 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.448 /dev/nbd1' 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.448 256+0 records in 00:04:30.448 256+0 records out 00:04:30.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103374 s, 101 MB/s 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.448 16:06:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.707 256+0 records in 00:04:30.707 256+0 records out 00:04:30.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140953 s, 74.4 MB/s 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.707 256+0 records in 00:04:30.707 256+0 records out 00:04:30.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146082 s, 71.8 MB/s 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.707 16:06:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.966 16:06:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.966 16:06:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.966 16:06:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.966 16:06:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.966 16:06:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.966 16:06:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.966 16:06:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.966 16:06:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.966 16:06:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:30.966 16:06:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:30.966 16:06:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:30.966 16:06:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:30.966 16:06:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.966 16:06:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.966 16:06:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:30.966 16:06:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.966 16:06:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.966 16:06:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.966 16:06:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.966 16:06:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.226 16:06:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.226 16:06:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.485 16:06:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:31.743 [2024-11-20 16:06:02.749967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.743 [2024-11-20 16:06:02.787369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.743 [2024-11-20 16:06:02.787370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.743 [2024-11-20 16:06:02.828514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:31.743 [2024-11-20 16:06:02.828557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.031 16:06:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1732198 /var/tmp/spdk-nbd.sock 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1732198 ']' 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:35.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
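Teardown mirrors the setup: each NBD device is detached with nbd_stop_disk, the harness waits for its entry to disappear from /proc/partitions, and the application is asked to exit via spdk_kill_instance SIGTERM followed by a short sleep before the next round (or the final killprocess) runs. A hedged sketch of that sequence, with a simplified wait loop standing in for the waitfornbd_exit helper:

# Sketch: detach the NBD devices and shut the app down cleanly.
RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

for nbd in /dev/nbd0 /dev/nbd1; do
    $RPC nbd_stop_disk "$nbd"
    for _ in $(seq 1 20); do                                 # wait for the kernel to drop the device
        grep -q -w "$(basename "$nbd")" /proc/partitions || break
        sleep 0.1
    done
done

$RPC spdk_kill_instance SIGTERM                              # ask the app to exit gracefully
sleep 3                                                      # matches the harness's pause between rounds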
00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:35.031 16:06:05 event.app_repeat -- event/event.sh@39 -- # killprocess 1732198 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1732198 ']' 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1732198 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1732198 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1732198' 00:04:35.031 killing process with pid 1732198 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1732198 00:04:35.031 16:06:05 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1732198 00:04:35.031 spdk_app_start is called in Round 0. 00:04:35.031 Shutdown signal received, stop current app iteration 00:04:35.031 Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 reinitialization... 00:04:35.031 spdk_app_start is called in Round 1. 00:04:35.031 Shutdown signal received, stop current app iteration 00:04:35.031 Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 reinitialization... 00:04:35.031 spdk_app_start is called in Round 2. 00:04:35.031 Shutdown signal received, stop current app iteration 00:04:35.031 Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 reinitialization... 00:04:35.031 spdk_app_start is called in Round 3. 
00:04:35.031 Shutdown signal received, stop current app iteration 00:04:35.031 16:06:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:35.031 16:06:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:35.031 00:04:35.031 real 0m16.427s 00:04:35.031 user 0m36.129s 00:04:35.031 sys 0m2.471s 00:04:35.031 16:06:06 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.031 16:06:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.031 ************************************ 00:04:35.031 END TEST app_repeat 00:04:35.031 ************************************ 00:04:35.031 16:06:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:35.031 16:06:06 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:35.031 16:06:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.031 16:06:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.031 16:06:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.031 ************************************ 00:04:35.031 START TEST cpu_locks 00:04:35.031 ************************************ 00:04:35.031 16:06:06 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:35.031 * Looking for test storage... 00:04:35.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:35.031 16:06:06 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.031 16:06:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.031 16:06:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.031 16:06:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.031 16:06:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:35.032 16:06:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.032 16:06:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.032 16:06:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.032 16:06:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:35.032 16:06:06 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.032 16:06:06 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.032 --rc genhtml_branch_coverage=1 00:04:35.032 --rc genhtml_function_coverage=1 00:04:35.032 --rc genhtml_legend=1 00:04:35.032 --rc geninfo_all_blocks=1 00:04:35.032 --rc geninfo_unexecuted_blocks=1 00:04:35.032 00:04:35.032 ' 00:04:35.032 16:06:06 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.032 --rc genhtml_branch_coverage=1 00:04:35.032 --rc genhtml_function_coverage=1 00:04:35.032 --rc genhtml_legend=1 00:04:35.032 --rc geninfo_all_blocks=1 00:04:35.032 --rc geninfo_unexecuted_blocks=1 00:04:35.032 00:04:35.032 ' 00:04:35.032 16:06:06 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.032 --rc genhtml_branch_coverage=1 00:04:35.032 --rc genhtml_function_coverage=1 00:04:35.032 --rc genhtml_legend=1 00:04:35.032 --rc geninfo_all_blocks=1 00:04:35.032 --rc geninfo_unexecuted_blocks=1 00:04:35.032 00:04:35.032 ' 00:04:35.032 16:06:06 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.032 --rc genhtml_branch_coverage=1 00:04:35.032 --rc genhtml_function_coverage=1 00:04:35.032 --rc genhtml_legend=1 00:04:35.032 --rc geninfo_all_blocks=1 00:04:35.032 --rc geninfo_unexecuted_blocks=1 00:04:35.032 00:04:35.032 ' 00:04:35.032 16:06:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:35.032 16:06:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:35.032 16:06:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:35.032 16:06:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:35.032 16:06:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.032 16:06:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.032 16:06:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.291 ************************************ 
00:04:35.291 START TEST default_locks 00:04:35.291 ************************************ 00:04:35.291 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:35.291 16:06:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1735658 00:04:35.291 16:06:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1735658 00:04:35.291 16:06:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.291 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1735658 ']' 00:04:35.291 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.291 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.291 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.291 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.291 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.291 [2024-11-20 16:06:06.339681] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:04:35.291 [2024-11-20 16:06:06.339726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735658 ] 00:04:35.291 [2024-11-20 16:06:06.414878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.291 [2024-11-20 16:06:06.456563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.550 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.550 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:35.550 16:06:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1735658 00:04:35.550 16:06:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1735658 00:04:35.550 16:06:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:35.809 lslocks: write error 00:04:35.809 16:06:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1735658 00:04:35.809 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1735658 ']' 00:04:35.809 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1735658 00:04:35.809 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:35.809 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.809 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735658 00:04:35.809 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.809 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.809 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1735658' 00:04:35.809 killing process with pid 1735658 00:04:35.809 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1735658 00:04:35.809 16:06:06 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1735658 00:04:36.068 16:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1735658 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1735658 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1735658 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1735658 ']' 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
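[annotation] The default_locks sequence above boils down to: start one spdk_tgt on core 0, confirm it holds a CPU-core lock file, then kill it and expect a later wait on the same pid to fail. A condensed sketch using the same lslocks/grep pipeline as the trace (the "lslocks: write error" line is most likely lslocks hitting a closed pipe once grep -q matches, not a test failure); the sleep stands in for the harness's waitforlisten helper.
    bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $bin -m 0x1 &                      # single-core target, claims its core lock on start
    pid=$!
    sleep 1                            # the harness waits for the RPC socket instead of sleeping
    # locks_exist: a live target must hold an spdk_cpu_lock file
    lslocks -p "$pid" | grep -q spdk_cpu_lock
    kill "$pid"                        # after this, waiting on the same pid is expected to fail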
00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1735658) - No such process 00:04:36.069 ERROR: process (pid: 1735658) is no longer running 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:36.069 00:04:36.069 real 0m1.007s 00:04:36.069 user 0m0.950s 00:04:36.069 sys 0m0.464s 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.069 16:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.069 ************************************ 00:04:36.069 END TEST default_locks 00:04:36.069 ************************************ 00:04:36.328 16:06:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:36.328 16:06:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.328 16:06:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.328 16:06:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.328 ************************************ 00:04:36.328 START TEST default_locks_via_rpc 00:04:36.328 ************************************ 00:04:36.328 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:36.328 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1735966 00:04:36.328 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1735966 00:04:36.328 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.328 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1735966 ']' 00:04:36.328 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.328 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.328 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:36.328 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.328 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.328 [2024-11-20 16:06:07.412395] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:04:36.328 [2024-11-20 16:06:07.412435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735966 ] 00:04:36.328 [2024-11-20 16:06:07.488161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.328 [2024-11-20 16:06:07.529969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1735966 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1735966 00:04:36.588 16:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.155 16:06:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1735966 00:04:37.155 16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1735966 ']' 00:04:37.155 16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1735966 00:04:37.155 16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:37.155 16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.155 16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735966 00:04:37.155 16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.155 
16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.155 16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1735966' 00:04:37.155 killing process with pid 1735966 00:04:37.155 16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1735966 00:04:37.155 16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1735966 00:04:37.414 00:04:37.414 real 0m1.180s 00:04:37.414 user 0m1.138s 00:04:37.414 sys 0m0.537s 00:04:37.414 16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.414 16:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.414 ************************************ 00:04:37.414 END TEST default_locks_via_rpc 00:04:37.414 ************************************ 00:04:37.414 16:06:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:37.414 16:06:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.414 16:06:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.414 16:06:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.414 ************************************ 00:04:37.414 START TEST non_locking_app_on_locked_coremask 00:04:37.414 ************************************ 00:04:37.414 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:37.414 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1736212 00:04:37.414 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1736212 /var/tmp/spdk.sock 00:04:37.414 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.414 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1736212 ']' 00:04:37.414 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.414 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.414 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.414 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.414 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.673 [2024-11-20 16:06:08.665137] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
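[annotation] The default_locks_via_rpc run that finished above toggles the same lock at runtime rather than at start-up. A rough sketch, assuming the rpc_cmd helper in the trace forwards to scripts/rpc.py on /var/tmp/spdk.sock; the pid is the one printed for this run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    pid=1735966                        # spdk_tgt pid from this run's trace
    # release the core lock files while the target keeps running ...
    $rpc -s "$sock" framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no core lock held, as expected"
    # ... then take them back on demand
    $rpc -s "$sock" framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"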
00:04:37.673 [2024-11-20 16:06:08.665178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736212 ] 00:04:37.673 [2024-11-20 16:06:08.739641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.673 [2024-11-20 16:06:08.781398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.933 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.933 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:37.933 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1736234 00:04:37.933 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1736234 /var/tmp/spdk2.sock 00:04:37.933 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:37.933 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1736234 ']' 00:04:37.933 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:37.933 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.933 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:37.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:37.933 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.933 16:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.933 [2024-11-20 16:06:09.048351] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:04:37.933 [2024-11-20 16:06:09.048395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736234 ] 00:04:37.933 [2024-11-20 16:06:09.132297] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:37.933 [2024-11-20 16:06:09.132319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.192 [2024-11-20 16:06:09.213346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.759 16:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.759 16:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.759 16:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1736212 00:04:38.759 16:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1736212 00:04:38.759 16:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.326 lslocks: write error 00:04:39.326 16:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1736212 00:04:39.326 16:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1736212 ']' 00:04:39.326 16:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1736212 00:04:39.326 16:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.326 16:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.326 16:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1736212 00:04:39.326 16:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.326 16:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.326 16:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1736212' 00:04:39.326 killing process with pid 1736212 00:04:39.326 16:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1736212 00:04:39.326 16:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1736212 00:04:39.894 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1736234 00:04:39.894 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1736234 ']' 00:04:39.894 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1736234 00:04:39.894 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.894 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.894 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1736234 00:04:39.894 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.894 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.894 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1736234' 00:04:39.894 
killing process with pid 1736234 00:04:39.894 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1736234 00:04:39.894 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1736234 00:04:40.153 00:04:40.153 real 0m2.757s 00:04:40.153 user 0m2.905s 00:04:40.153 sys 0m0.908s 00:04:40.153 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.153 16:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.153 ************************************ 00:04:40.153 END TEST non_locking_app_on_locked_coremask 00:04:40.153 ************************************ 00:04:40.412 16:06:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:40.412 16:06:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.412 16:06:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.412 16:06:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.412 ************************************ 00:04:40.412 START TEST locking_app_on_unlocked_coremask 00:04:40.412 ************************************ 00:04:40.412 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:40.412 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1736720 00:04:40.412 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1736720 /var/tmp/spdk.sock 00:04:40.412 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:40.412 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1736720 ']' 00:04:40.412 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.412 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.412 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.412 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.412 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.412 [2024-11-20 16:06:11.493980] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:04:40.412 [2024-11-20 16:06:11.494029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736720 ] 00:04:40.412 [2024-11-20 16:06:11.566341] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
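[annotation] The non_locking_app_on_locked_coremask run that closed just above pairs a locking target with a non-locking one on the same core. A condensed sketch with the flags and sockets taken from the trace; pids are placeholders and the sleep stands in for waitforlisten.
    bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $bin -m 0x1 &                                        # locking target, claims core 0
    pid1=$!
    # same core mask, but locks disabled and a second RPC socket, so both can run side by side
    $bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    sleep 1
    lslocks -p "$pid1" | grep -q spdk_cpu_lock           # only the first instance holds the lock
    kill "$pid1" "$pid2"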
00:04:40.412 [2024-11-20 16:06:11.566365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.412 [2024-11-20 16:06:11.604417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.671 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.671 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:40.671 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1736728 00:04:40.671 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1736728 /var/tmp/spdk2.sock 00:04:40.671 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:40.671 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1736728 ']' 00:04:40.671 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.671 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.671 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.671 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.671 16:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.671 [2024-11-20 16:06:11.877758] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:04:40.671 [2024-11-20 16:06:11.877807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736728 ] 00:04:40.929 [2024-11-20 16:06:11.970231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.929 [2024-11-20 16:06:12.052433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.496 16:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.496 16:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:41.496 16:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1736728 00:04:41.496 16:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1736728 00:04:41.496 16:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.065 lslocks: write error 00:04:42.065 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1736720 00:04:42.065 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1736720 ']' 00:04:42.065 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1736720 00:04:42.065 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:42.065 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.065 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1736720 00:04:42.065 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.065 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.065 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1736720' 00:04:42.065 killing process with pid 1736720 00:04:42.065 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1736720 00:04:42.065 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1736720 00:04:42.634 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1736728 00:04:42.634 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1736728 ']' 00:04:42.634 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1736728 00:04:42.634 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:42.634 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.634 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1736728 00:04:42.634 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.634 16:06:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.634 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1736728' 00:04:42.634 killing process with pid 1736728 00:04:42.634 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1736728 00:04:42.634 16:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1736728 00:04:42.893 00:04:42.893 real 0m2.664s 00:04:42.893 user 0m2.818s 00:04:42.893 sys 0m0.881s 00:04:42.893 16:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.894 16:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.894 ************************************ 00:04:42.894 END TEST locking_app_on_unlocked_coremask 00:04:42.894 ************************************ 00:04:43.152 16:06:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:43.153 16:06:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.153 16:06:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.153 16:06:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.153 ************************************ 00:04:43.153 START TEST locking_app_on_locked_coremask 00:04:43.153 ************************************ 00:04:43.153 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:43.153 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.153 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1737220 00:04:43.153 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1737220 /var/tmp/spdk.sock 00:04:43.153 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1737220 ']' 00:04:43.153 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.153 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.153 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.153 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.153 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.153 [2024-11-20 16:06:14.221062] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
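[annotation] locking_app_on_unlocked_coremask, which ended above, is the mirror image of the previous case: the first target opts out with --disable-cpumask-locks, and the second, plain locking target is the one expected to hold the lock. Sketch under the same assumptions (placeholder pids, sleep instead of waitforlisten).
    bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $bin -m 0x1 --disable-cpumask-locks &                # first target takes no core lock
    unlocked_pid=$!
    $bin -m 0x1 -r /var/tmp/spdk2.sock &                 # second target is free to claim core 0
    locked_pid=$!
    sleep 1
    lslocks -p "$locked_pid" | grep -q spdk_cpu_lock     # the lock belongs to the second instance
    kill "$unlocked_pid" "$locked_pid"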
00:04:43.153 [2024-11-20 16:06:14.221103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737220 ] 00:04:43.153 [2024-11-20 16:06:14.295328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.153 [2024-11-20 16:06:14.336936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1737230 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1737230 /var/tmp/spdk2.sock 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1737230 /var/tmp/spdk2.sock 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1737230 /var/tmp/spdk2.sock 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1737230 ']' 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.411 16:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.411 [2024-11-20 16:06:14.608402] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:04:43.411 [2024-11-20 16:06:14.608452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737230 ] 00:04:43.670 [2024-11-20 16:06:14.690949] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1737220 has claimed it. 00:04:43.670 [2024-11-20 16:06:14.690978] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:44.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1737230) - No such process 00:04:44.237 ERROR: process (pid: 1737230) is no longer running 00:04:44.237 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.237 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:44.237 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:44.237 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.237 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.237 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.237 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1737220 00:04:44.237 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1737220 00:04:44.237 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.805 lslocks: write error 00:04:44.805 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1737220 00:04:44.805 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1737220 ']' 00:04:44.805 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1737220 00:04:44.805 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:44.805 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.805 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737220 00:04:44.805 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.805 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.805 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737220' 00:04:44.805 killing process with pid 1737220 00:04:44.805 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1737220 00:04:44.805 16:06:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1737220 00:04:45.064 00:04:45.064 real 0m1.906s 00:04:45.064 user 0m2.038s 00:04:45.064 sys 0m0.650s 00:04:45.064 16:06:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:45.064 16:06:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.064 ************************************ 00:04:45.064 END TEST locking_app_on_locked_coremask 00:04:45.064 ************************************ 00:04:45.064 16:06:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:45.064 16:06:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.064 16:06:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.064 16:06:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.064 ************************************ 00:04:45.064 START TEST locking_overlapped_coremask 00:04:45.064 ************************************ 00:04:45.064 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:45.064 16:06:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1737495 00:04:45.064 16:06:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1737495 /var/tmp/spdk.sock 00:04:45.065 16:06:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:45.065 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1737495 ']' 00:04:45.065 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.065 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.065 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.065 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.065 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.065 [2024-11-20 16:06:16.199879] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
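[annotation] locking_app_on_locked_coremask, whose output ends just above, exercises the negative path: with the core lock already held, a second locking target on the same core must refuse to start, which is exactly what the claim_cpu_cores ERROR and "Unable to acquire lock on assigned core mask - exiting" lines show. A rough sketch, error handling simplified.
    bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $bin -m 0x1 &                                        # first target claims core 0
    holder=$!
    sleep 1
    # a second locking target on the same core is expected to log
    #   "Cannot create lock on core 0, probably process <holder> has claimed it."
    # and exit without ever serving /var/tmp/spdk2.sock
    if $bin -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second instance started despite the core lock"
    fi
    lslocks -p "$holder" | grep -q spdk_cpu_lock         # the original lock is untouched
    kill "$holder"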
00:04:45.065 [2024-11-20 16:06:16.199918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737495 ] 00:04:45.065 [2024-11-20 16:06:16.272739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:45.323 [2024-11-20 16:06:16.317867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.323 [2024-11-20 16:06:16.317971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.323 [2024-11-20 16:06:16.317973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1737500 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1737500 /var/tmp/spdk2.sock 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1737500 /var/tmp/spdk2.sock 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1737500 /var/tmp/spdk2.sock 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1737500 ']' 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.323 16:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.581 [2024-11-20 16:06:16.575228] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:04:45.581 [2024-11-20 16:06:16.575279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737500 ] 00:04:45.581 [2024-11-20 16:06:16.668728] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1737495 has claimed it. 00:04:45.581 [2024-11-20 16:06:16.668765] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:46.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1737500) - No such process 00:04:46.147 ERROR: process (pid: 1737500) is no longer running 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1737495 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1737495 ']' 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1737495 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737495 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737495' 00:04:46.147 killing process with pid 1737495 00:04:46.147 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1737495 00:04:46.147 16:06:17 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1737495 00:04:46.406 00:04:46.406 real 0m1.421s 00:04:46.406 user 0m3.909s 00:04:46.406 sys 0m0.396s 00:04:46.406 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.406 16:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.406 ************************************ 00:04:46.406 END TEST locking_overlapped_coremask 00:04:46.406 ************************************ 00:04:46.406 16:06:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:46.406 16:06:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.406 16:06:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.406 16:06:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.664 ************************************ 00:04:46.664 START TEST locking_overlapped_coremask_via_rpc 00:04:46.664 ************************************ 00:04:46.664 16:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:46.664 16:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1737758 00:04:46.664 16:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1737758 /var/tmp/spdk.sock 00:04:46.664 16:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:46.664 16:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1737758 ']' 00:04:46.664 16:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.664 16:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.664 16:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.664 16:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.664 16:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.664 [2024-11-20 16:06:17.692017] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:04:46.664 [2024-11-20 16:06:17.692062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737758 ] 00:04:46.664 [2024-11-20 16:06:17.765588] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:46.664 [2024-11-20 16:06:17.765610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.664 [2024-11-20 16:06:17.809079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.664 [2024-11-20 16:06:17.810216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.664 [2024-11-20 16:06:17.810219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.596 16:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.596 16:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:47.596 16:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1737990 00:04:47.596 16:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1737990 /var/tmp/spdk2.sock 00:04:47.596 16:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:47.596 16:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1737990 ']' 00:04:47.596 16:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.596 16:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.596 16:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.596 16:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.596 16:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.596 [2024-11-20 16:06:18.569162] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:04:47.596 [2024-11-20 16:06:18.569218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737990 ] 00:04:47.596 [2024-11-20 16:06:18.661431] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:47.596 [2024-11-20 16:06:18.661463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:47.596 [2024-11-20 16:06:18.748643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.596 [2024-11-20 16:06:18.748676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.596 [2024-11-20 16:06:18.748677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.528 [2024-11-20 16:06:19.417282] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1737758 has claimed it. 
00:04:48.528 request: 00:04:48.528 { 00:04:48.528 "method": "framework_enable_cpumask_locks", 00:04:48.528 "req_id": 1 00:04:48.528 } 00:04:48.528 Got JSON-RPC error response 00:04:48.528 response: 00:04:48.528 { 00:04:48.528 "code": -32603, 00:04:48.528 "message": "Failed to claim CPU core: 2" 00:04:48.528 } 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1737758 /var/tmp/spdk.sock 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1737758 ']' 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1737990 /var/tmp/spdk2.sock 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1737990 ']' 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.528 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.786 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.786 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.786 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:48.786 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:48.786 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:48.786 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:48.786 00:04:48.786 real 0m2.212s 00:04:48.786 user 0m0.947s 00:04:48.786 sys 0m0.186s 00:04:48.786 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.786 16:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.786 ************************************ 00:04:48.786 END TEST locking_overlapped_coremask_via_rpc 00:04:48.786 ************************************ 00:04:48.786 16:06:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:48.786 16:06:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1737758 ]] 00:04:48.786 16:06:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1737758 00:04:48.786 16:06:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1737758 ']' 00:04:48.786 16:06:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1737758 00:04:48.786 16:06:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:48.786 16:06:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.786 16:06:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737758 00:04:48.786 16:06:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.786 16:06:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.786 16:06:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737758' 00:04:48.786 killing process with pid 1737758 00:04:48.786 16:06:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1737758 00:04:48.786 16:06:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1737758 00:04:49.043 16:06:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1737990 ]] 00:04:49.043 16:06:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1737990 00:04:49.043 16:06:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1737990 ']' 00:04:49.043 16:06:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1737990 00:04:49.043 16:06:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:49.043 16:06:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:04:49.043 16:06:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737990 00:04:49.300 16:06:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:49.300 16:06:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:49.300 16:06:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737990' 00:04:49.300 killing process with pid 1737990 00:04:49.300 16:06:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1737990 00:04:49.300 16:06:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1737990 00:04:49.559 16:06:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.559 16:06:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:49.559 16:06:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1737758 ]] 00:04:49.559 16:06:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1737758 00:04:49.559 16:06:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1737758 ']' 00:04:49.559 16:06:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1737758 00:04:49.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1737758) - No such process 00:04:49.559 16:06:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1737758 is not found' 00:04:49.559 Process with pid 1737758 is not found 00:04:49.559 16:06:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1737990 ]] 00:04:49.559 16:06:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1737990 00:04:49.559 16:06:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1737990 ']' 00:04:49.559 16:06:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1737990 00:04:49.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1737990) - No such process 00:04:49.560 16:06:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1737990 is not found' 00:04:49.560 Process with pid 1737990 is not found 00:04:49.560 16:06:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.560 00:04:49.560 real 0m14.539s 00:04:49.560 user 0m25.964s 00:04:49.560 sys 0m4.991s 00:04:49.560 16:06:20 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.560 16:06:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.560 ************************************ 00:04:49.560 END TEST cpu_locks 00:04:49.560 ************************************ 00:04:49.560 00:04:49.560 real 0m39.589s 00:04:49.560 user 1m16.534s 00:04:49.560 sys 0m8.447s 00:04:49.560 16:06:20 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.560 16:06:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.560 ************************************ 00:04:49.560 END TEST event 00:04:49.560 ************************************ 00:04:49.560 16:06:20 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:49.560 16:06:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.560 16:06:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.560 16:06:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.560 ************************************ 00:04:49.560 START TEST thread 00:04:49.560 ************************************ 00:04:49.560 16:06:20 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:49.819 * Looking for test storage... 00:04:49.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:49.819 16:06:20 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.819 16:06:20 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.819 16:06:20 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.819 16:06:20 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.819 16:06:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.819 16:06:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.819 16:06:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.819 16:06:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.820 16:06:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.820 16:06:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.820 16:06:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.820 16:06:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.820 16:06:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.820 16:06:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.820 16:06:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.820 16:06:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:49.820 16:06:20 thread -- scripts/common.sh@345 -- # : 1 00:04:49.820 16:06:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.820 16:06:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.820 16:06:20 thread -- scripts/common.sh@365 -- # decimal 1 00:04:49.820 16:06:20 thread -- scripts/common.sh@353 -- # local d=1 00:04:49.820 16:06:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.820 16:06:20 thread -- scripts/common.sh@355 -- # echo 1 00:04:49.820 16:06:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.820 16:06:20 thread -- scripts/common.sh@366 -- # decimal 2 00:04:49.820 16:06:20 thread -- scripts/common.sh@353 -- # local d=2 00:04:49.820 16:06:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.820 16:06:20 thread -- scripts/common.sh@355 -- # echo 2 00:04:49.820 16:06:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.820 16:06:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.820 16:06:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.820 16:06:20 thread -- scripts/common.sh@368 -- # return 0 00:04:49.820 16:06:20 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.820 16:06:20 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.820 --rc genhtml_branch_coverage=1 00:04:49.820 --rc genhtml_function_coverage=1 00:04:49.820 --rc genhtml_legend=1 00:04:49.820 --rc geninfo_all_blocks=1 00:04:49.820 --rc geninfo_unexecuted_blocks=1 00:04:49.820 00:04:49.820 ' 00:04:49.820 16:06:20 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.820 --rc genhtml_branch_coverage=1 00:04:49.820 --rc genhtml_function_coverage=1 00:04:49.820 --rc genhtml_legend=1 00:04:49.820 --rc geninfo_all_blocks=1 00:04:49.820 --rc geninfo_unexecuted_blocks=1 00:04:49.820 
00:04:49.820 ' 00:04:49.820 16:06:20 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.820 --rc genhtml_branch_coverage=1 00:04:49.820 --rc genhtml_function_coverage=1 00:04:49.820 --rc genhtml_legend=1 00:04:49.820 --rc geninfo_all_blocks=1 00:04:49.820 --rc geninfo_unexecuted_blocks=1 00:04:49.820 00:04:49.820 ' 00:04:49.820 16:06:20 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.820 --rc genhtml_branch_coverage=1 00:04:49.820 --rc genhtml_function_coverage=1 00:04:49.820 --rc genhtml_legend=1 00:04:49.820 --rc geninfo_all_blocks=1 00:04:49.820 --rc geninfo_unexecuted_blocks=1 00:04:49.820 00:04:49.820 ' 00:04:49.820 16:06:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.820 16:06:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:49.820 16:06:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.820 16:06:20 thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.820 ************************************ 00:04:49.820 START TEST thread_poller_perf 00:04:49.820 ************************************ 00:04:49.820 16:06:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.820 [2024-11-20 16:06:20.960213] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:04:49.820 [2024-11-20 16:06:20.960270] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738452 ] 00:04:49.820 [2024-11-20 16:06:21.039130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.079 [2024-11-20 16:06:21.080362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.079 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:51.017 [2024-11-20T15:06:22.251Z] ====================================== 00:04:51.017 [2024-11-20T15:06:22.251Z] busy:2105425306 (cyc) 00:04:51.017 [2024-11-20T15:06:22.251Z] total_run_count: 423000 00:04:51.017 [2024-11-20T15:06:22.251Z] tsc_hz: 2100000000 (cyc) 00:04:51.017 [2024-11-20T15:06:22.251Z] ====================================== 00:04:51.017 [2024-11-20T15:06:22.251Z] poller_cost: 4977 (cyc), 2370 (nsec) 00:04:51.017 00:04:51.017 real 0m1.184s 00:04:51.017 user 0m1.107s 00:04:51.017 sys 0m0.072s 00:04:51.017 16:06:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.017 16:06:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.017 ************************************ 00:04:51.017 END TEST thread_poller_perf 00:04:51.017 ************************************ 00:04:51.017 16:06:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:51.017 16:06:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:51.017 16:06:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.017 16:06:22 thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.017 ************************************ 00:04:51.017 START TEST thread_poller_perf 00:04:51.017 ************************************ 00:04:51.017 16:06:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:51.017 [2024-11-20 16:06:22.215005] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:04:51.017 [2024-11-20 16:06:22.215075] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738615 ] 00:04:51.276 [2024-11-20 16:06:22.294294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.276 [2024-11-20 16:06:22.338347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.276 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:04:52.213 [2024-11-20T15:06:23.447Z] ====================================== 00:04:52.213 [2024-11-20T15:06:23.447Z] busy:2101460524 (cyc) 00:04:52.213 [2024-11-20T15:06:23.447Z] total_run_count: 5520000 00:04:52.213 [2024-11-20T15:06:23.447Z] tsc_hz: 2100000000 (cyc) 00:04:52.213 [2024-11-20T15:06:23.447Z] ====================================== 00:04:52.213 [2024-11-20T15:06:23.447Z] poller_cost: 380 (cyc), 180 (nsec) 00:04:52.213 00:04:52.213 real 0m1.184s 00:04:52.213 user 0m1.108s 00:04:52.213 sys 0m0.071s 00:04:52.213 16:06:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.213 16:06:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.213 ************************************ 00:04:52.213 END TEST thread_poller_perf 00:04:52.213 ************************************ 00:04:52.213 16:06:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:52.213 00:04:52.213 real 0m2.689s 00:04:52.213 user 0m2.376s 00:04:52.213 sys 0m0.327s 00:04:52.213 16:06:23 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.213 16:06:23 thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.213 ************************************ 00:04:52.213 END TEST thread 00:04:52.213 ************************************ 00:04:52.472 16:06:23 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:52.472 16:06:23 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:52.472 16:06:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.472 16:06:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.472 16:06:23 -- common/autotest_common.sh@10 -- # set +x 00:04:52.472 ************************************ 00:04:52.472 START TEST app_cmdline 00:04:52.472 ************************************ 00:04:52.472 16:06:23 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:52.472 * Looking for test storage... 
00:04:52.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:52.472 16:06:23 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.472 16:06:23 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.472 16:06:23 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.472 16:06:23 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.472 16:06:23 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.473 16:06:23 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:52.473 16:06:23 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.473 16:06:23 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.473 --rc genhtml_branch_coverage=1 00:04:52.473 --rc genhtml_function_coverage=1 00:04:52.473 --rc genhtml_legend=1 00:04:52.473 --rc geninfo_all_blocks=1 00:04:52.473 --rc geninfo_unexecuted_blocks=1 00:04:52.473 00:04:52.473 ' 00:04:52.473 16:06:23 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.473 --rc genhtml_branch_coverage=1 00:04:52.473 --rc genhtml_function_coverage=1 00:04:52.473 --rc genhtml_legend=1 00:04:52.473 --rc geninfo_all_blocks=1 00:04:52.473 --rc geninfo_unexecuted_blocks=1 
00:04:52.473 00:04:52.473 ' 00:04:52.473 16:06:23 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.473 --rc genhtml_branch_coverage=1 00:04:52.473 --rc genhtml_function_coverage=1 00:04:52.473 --rc genhtml_legend=1 00:04:52.473 --rc geninfo_all_blocks=1 00:04:52.473 --rc geninfo_unexecuted_blocks=1 00:04:52.473 00:04:52.473 ' 00:04:52.473 16:06:23 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.473 --rc genhtml_branch_coverage=1 00:04:52.473 --rc genhtml_function_coverage=1 00:04:52.473 --rc genhtml_legend=1 00:04:52.473 --rc geninfo_all_blocks=1 00:04:52.473 --rc geninfo_unexecuted_blocks=1 00:04:52.473 00:04:52.473 ' 00:04:52.473 16:06:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:52.473 16:06:23 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:52.473 16:06:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1738975 00:04:52.473 16:06:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1738975 00:04:52.473 16:06:23 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1738975 ']' 00:04:52.473 16:06:23 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.473 16:06:23 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.473 16:06:23 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.473 16:06:23 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.473 16:06:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:52.732 [2024-11-20 16:06:23.713098] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:04:52.732 [2024-11-20 16:06:23.713149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738975 ] 00:04:52.732 [2024-11-20 16:06:23.790578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.732 [2024-11-20 16:06:23.830216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.991 16:06:24 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.991 16:06:24 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:52.991 16:06:24 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:52.991 { 00:04:52.991 "version": "SPDK v25.01-pre git sha1 66a383faf", 00:04:52.991 "fields": { 00:04:52.991 "major": 25, 00:04:52.991 "minor": 1, 00:04:52.991 "patch": 0, 00:04:52.991 "suffix": "-pre", 00:04:52.991 "commit": "66a383faf" 00:04:52.991 } 00:04:52.991 } 00:04:53.250 16:06:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:53.250 16:06:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:53.250 16:06:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:53.250 16:06:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:53.250 16:06:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:53.250 16:06:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:53.250 16:06:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.250 16:06:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:53.250 16:06:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:53.250 16:06:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:53.250 request: 00:04:53.250 { 00:04:53.250 "method": "env_dpdk_get_mem_stats", 00:04:53.250 "req_id": 1 00:04:53.250 } 00:04:53.250 Got JSON-RPC error response 00:04:53.250 response: 00:04:53.250 { 00:04:53.250 "code": -32601, 00:04:53.250 "message": "Method not found" 00:04:53.250 } 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.250 16:06:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1738975 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1738975 ']' 00:04:53.250 16:06:24 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1738975 00:04:53.509 16:06:24 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:53.509 16:06:24 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.509 16:06:24 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1738975 00:04:53.509 16:06:24 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.509 16:06:24 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.509 16:06:24 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1738975' 00:04:53.509 killing process with pid 1738975 00:04:53.509 16:06:24 app_cmdline -- common/autotest_common.sh@973 -- # kill 1738975 00:04:53.509 16:06:24 app_cmdline -- common/autotest_common.sh@978 -- # wait 1738975 00:04:53.767 00:04:53.767 real 0m1.340s 00:04:53.767 user 0m1.576s 00:04:53.767 sys 0m0.441s 00:04:53.767 16:06:24 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.767 16:06:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:53.767 ************************************ 00:04:53.767 END TEST app_cmdline 00:04:53.767 ************************************ 00:04:53.767 16:06:24 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.767 16:06:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.767 16:06:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.767 16:06:24 -- common/autotest_common.sh@10 -- # set +x 00:04:53.767 ************************************ 00:04:53.767 START TEST version 00:04:53.767 ************************************ 00:04:53.767 16:06:24 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.767 * Looking for test storage... 
00:04:53.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:53.767 16:06:24 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.767 16:06:24 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.767 16:06:24 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.027 16:06:25 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.027 16:06:25 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.027 16:06:25 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.027 16:06:25 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.027 16:06:25 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.027 16:06:25 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.027 16:06:25 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.027 16:06:25 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.027 16:06:25 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.027 16:06:25 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.027 16:06:25 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.027 16:06:25 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.027 16:06:25 version -- scripts/common.sh@344 -- # case "$op" in 00:04:54.027 16:06:25 version -- scripts/common.sh@345 -- # : 1 00:04:54.027 16:06:25 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.027 16:06:25 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.027 16:06:25 version -- scripts/common.sh@365 -- # decimal 1 00:04:54.027 16:06:25 version -- scripts/common.sh@353 -- # local d=1 00:04:54.027 16:06:25 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.027 16:06:25 version -- scripts/common.sh@355 -- # echo 1 00:04:54.027 16:06:25 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.027 16:06:25 version -- scripts/common.sh@366 -- # decimal 2 00:04:54.027 16:06:25 version -- scripts/common.sh@353 -- # local d=2 00:04:54.027 16:06:25 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.027 16:06:25 version -- scripts/common.sh@355 -- # echo 2 00:04:54.027 16:06:25 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.027 16:06:25 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.027 16:06:25 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.027 16:06:25 version -- scripts/common.sh@368 -- # return 0 00:04:54.027 16:06:25 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.027 16:06:25 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.027 --rc genhtml_branch_coverage=1 00:04:54.027 --rc genhtml_function_coverage=1 00:04:54.027 --rc genhtml_legend=1 00:04:54.027 --rc geninfo_all_blocks=1 00:04:54.027 --rc geninfo_unexecuted_blocks=1 00:04:54.027 00:04:54.027 ' 00:04:54.027 16:06:25 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.027 --rc genhtml_branch_coverage=1 00:04:54.027 --rc genhtml_function_coverage=1 00:04:54.027 --rc genhtml_legend=1 00:04:54.027 --rc geninfo_all_blocks=1 00:04:54.027 --rc geninfo_unexecuted_blocks=1 00:04:54.027 00:04:54.027 ' 00:04:54.027 16:06:25 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.027 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.027 --rc genhtml_branch_coverage=1 00:04:54.027 --rc genhtml_function_coverage=1 00:04:54.027 --rc genhtml_legend=1 00:04:54.027 --rc geninfo_all_blocks=1 00:04:54.027 --rc geninfo_unexecuted_blocks=1 00:04:54.027 00:04:54.027 ' 00:04:54.027 16:06:25 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.027 --rc genhtml_branch_coverage=1 00:04:54.027 --rc genhtml_function_coverage=1 00:04:54.027 --rc genhtml_legend=1 00:04:54.027 --rc geninfo_all_blocks=1 00:04:54.027 --rc geninfo_unexecuted_blocks=1 00:04:54.027 00:04:54.027 ' 00:04:54.027 16:06:25 version -- app/version.sh@17 -- # get_header_version major 00:04:54.027 16:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:54.027 16:06:25 version -- app/version.sh@14 -- # cut -f2 00:04:54.027 16:06:25 version -- app/version.sh@14 -- # tr -d '"' 00:04:54.027 16:06:25 version -- app/version.sh@17 -- # major=25 00:04:54.027 16:06:25 version -- app/version.sh@18 -- # get_header_version minor 00:04:54.027 16:06:25 version -- app/version.sh@14 -- # cut -f2 00:04:54.027 16:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:54.027 16:06:25 version -- app/version.sh@14 -- # tr -d '"' 00:04:54.027 16:06:25 version -- app/version.sh@18 -- # minor=1 00:04:54.027 16:06:25 version -- app/version.sh@19 -- # get_header_version patch 00:04:54.027 16:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:54.027 16:06:25 version -- app/version.sh@14 -- # cut -f2 00:04:54.027 16:06:25 version -- app/version.sh@14 -- # tr -d '"' 00:04:54.027 16:06:25 version -- app/version.sh@19 -- # patch=0 00:04:54.027 16:06:25 version -- app/version.sh@20 -- # get_header_version suffix 00:04:54.027 16:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:54.027 16:06:25 version -- app/version.sh@14 -- # cut -f2 00:04:54.028 16:06:25 version -- app/version.sh@14 -- # tr -d '"' 00:04:54.028 16:06:25 version -- app/version.sh@20 -- # suffix=-pre 00:04:54.028 16:06:25 version -- app/version.sh@22 -- # version=25.1 00:04:54.028 16:06:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:54.028 16:06:25 version -- app/version.sh@28 -- # version=25.1rc0 00:04:54.028 16:06:25 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:54.028 16:06:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:54.028 16:06:25 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:54.028 16:06:25 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:54.028 00:04:54.028 real 0m0.249s 00:04:54.028 user 0m0.148s 00:04:54.028 sys 0m0.143s 00:04:54.028 16:06:25 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.028 
16:06:25 version -- common/autotest_common.sh@10 -- # set +x 00:04:54.028 ************************************ 00:04:54.028 END TEST version 00:04:54.028 ************************************ 00:04:54.028 16:06:25 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:54.028 16:06:25 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:54.028 16:06:25 -- spdk/autotest.sh@194 -- # uname -s 00:04:54.028 16:06:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:54.028 16:06:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:54.028 16:06:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:54.028 16:06:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:54.028 16:06:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:54.028 16:06:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:54.028 16:06:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.028 16:06:25 -- common/autotest_common.sh@10 -- # set +x 00:04:54.028 16:06:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:54.028 16:06:25 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:54.028 16:06:25 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:54.028 16:06:25 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:54.028 16:06:25 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:54.028 16:06:25 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:54.028 16:06:25 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:54.028 16:06:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:54.028 16:06:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.028 16:06:25 -- common/autotest_common.sh@10 -- # set +x 00:04:54.288 ************************************ 00:04:54.288 START TEST nvmf_tcp 00:04:54.288 ************************************ 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:54.288 * Looking for test storage... 
00:04:54.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.288 16:06:25 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.288 --rc genhtml_branch_coverage=1 00:04:54.288 --rc genhtml_function_coverage=1 00:04:54.288 --rc genhtml_legend=1 00:04:54.288 --rc geninfo_all_blocks=1 00:04:54.288 --rc geninfo_unexecuted_blocks=1 00:04:54.288 00:04:54.288 ' 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.288 --rc genhtml_branch_coverage=1 00:04:54.288 --rc genhtml_function_coverage=1 00:04:54.288 --rc genhtml_legend=1 00:04:54.288 --rc geninfo_all_blocks=1 00:04:54.288 --rc geninfo_unexecuted_blocks=1 00:04:54.288 00:04:54.288 ' 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:54.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.288 --rc genhtml_branch_coverage=1 00:04:54.288 --rc genhtml_function_coverage=1 00:04:54.288 --rc genhtml_legend=1 00:04:54.288 --rc geninfo_all_blocks=1 00:04:54.288 --rc geninfo_unexecuted_blocks=1 00:04:54.288 00:04:54.288 ' 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.288 --rc genhtml_branch_coverage=1 00:04:54.288 --rc genhtml_function_coverage=1 00:04:54.288 --rc genhtml_legend=1 00:04:54.288 --rc geninfo_all_blocks=1 00:04:54.288 --rc geninfo_unexecuted_blocks=1 00:04:54.288 00:04:54.288 ' 00:04:54.288 16:06:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:54.288 16:06:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:54.288 16:06:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.288 16:06:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.288 ************************************ 00:04:54.288 START TEST nvmf_target_core 00:04:54.288 ************************************ 00:04:54.288 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:54.549 * Looking for test storage... 00:04:54.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.549 --rc genhtml_branch_coverage=1 00:04:54.549 --rc genhtml_function_coverage=1 00:04:54.549 --rc genhtml_legend=1 00:04:54.549 --rc geninfo_all_blocks=1 00:04:54.549 --rc geninfo_unexecuted_blocks=1 00:04:54.549 00:04:54.549 ' 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.549 --rc genhtml_branch_coverage=1 00:04:54.549 --rc genhtml_function_coverage=1 00:04:54.549 --rc genhtml_legend=1 00:04:54.549 --rc geninfo_all_blocks=1 00:04:54.549 --rc geninfo_unexecuted_blocks=1 00:04:54.549 00:04:54.549 ' 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.549 --rc genhtml_branch_coverage=1 00:04:54.549 --rc genhtml_function_coverage=1 00:04:54.549 --rc genhtml_legend=1 00:04:54.549 --rc geninfo_all_blocks=1 00:04:54.549 --rc geninfo_unexecuted_blocks=1 00:04:54.549 00:04:54.549 ' 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.549 --rc genhtml_branch_coverage=1 00:04:54.549 --rc genhtml_function_coverage=1 00:04:54.549 --rc genhtml_legend=1 00:04:54.549 --rc geninfo_all_blocks=1 00:04:54.549 --rc geninfo_unexecuted_blocks=1 00:04:54.549 00:04:54.549 ' 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.549 16:06:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:54.550 
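The run_test helper seen in the trace above drives every test script the same way: it checks that a script was actually passed, silences its own xtrace, prints the START/END banners that bracket the output below, and times the script so the real/user/sys summary can be printed before END TEST. A rough sketch of that pattern, inferred from the banners and the "set +x" traces in this log rather than copied from autotest_common.sh:

    run_test() {
        [ $# -le 1 ] && return 1      # same guard as the '[' 3 -le 1 ']' check traced above
        local name=$1; shift
        set +x                        # keep the wrapper itself out of the xtrace output
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        set -x
        time "$@"                     # run the test script with its arguments
        set +x
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        set -x
    }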
************************************ 00:04:54.550 START TEST nvmf_abort 00:04:54.550 ************************************ 00:04:54.550 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:54.810 * Looking for test storage... 00:04:54.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.810 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.811 --rc genhtml_branch_coverage=1 00:04:54.811 --rc genhtml_function_coverage=1 00:04:54.811 --rc genhtml_legend=1 00:04:54.811 --rc geninfo_all_blocks=1 00:04:54.811 --rc geninfo_unexecuted_blocks=1 00:04:54.811 00:04:54.811 ' 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.811 --rc genhtml_branch_coverage=1 00:04:54.811 --rc genhtml_function_coverage=1 00:04:54.811 --rc genhtml_legend=1 00:04:54.811 --rc geninfo_all_blocks=1 00:04:54.811 --rc geninfo_unexecuted_blocks=1 00:04:54.811 00:04:54.811 ' 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.811 --rc genhtml_branch_coverage=1 00:04:54.811 --rc genhtml_function_coverage=1 00:04:54.811 --rc genhtml_legend=1 00:04:54.811 --rc geninfo_all_blocks=1 00:04:54.811 --rc geninfo_unexecuted_blocks=1 00:04:54.811 00:04:54.811 ' 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.811 --rc genhtml_branch_coverage=1 00:04:54.811 --rc genhtml_function_coverage=1 00:04:54.811 --rc genhtml_legend=1 00:04:54.811 --rc geninfo_all_blocks=1 00:04:54.811 --rc geninfo_unexecuted_blocks=1 00:04:54.811 00:04:54.811 ' 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
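nvmftestinit, traced next, builds the physical-NIC (NET_TYPE=phy) test topology: it scans the PCI bus for supported e810/x722/mlx devices, keeps the two ice ports found on this host (cvl_0_0 and cvl_0_1), moves the target port into a private network namespace, assigns the 10.0.0.x addresses, opens TCP port 4420 in iptables, and ping-checks both directions. Collected from the trace that follows, the setup on this particular host amounts to:

    # cvl_0_0 becomes the target NIC inside its own namespace; cvl_0_1 stays
    # in the root namespace as the initiator side.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator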
00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:54.811 16:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.387 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:01.387 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:01.388 16:06:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:01.388 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:01.388 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:01.388 16:06:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:01.388 Found net devices under 0000:86:00.0: cvl_0_0 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:01.388 Found net devices under 0000:86:00.1: cvl_0_1 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:01.388 16:06:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:01.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:01.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:05:01.388 00:05:01.388 --- 10.0.0.2 ping statistics --- 00:05:01.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:01.388 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:01.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:01.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:05:01.388 00:05:01.388 --- 10.0.0.1 ping statistics --- 00:05:01.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:01.388 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:01.388 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1742583 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1742583 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1742583 ']' 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.389 16:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.389 [2024-11-20 16:06:32.040540] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:05:01.389 [2024-11-20 16:06:32.040585] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:01.389 [2024-11-20 16:06:32.114284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.389 [2024-11-20 16:06:32.169851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:01.389 [2024-11-20 16:06:32.169895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:01.389 [2024-11-20 16:06:32.169907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.389 [2024-11-20 16:06:32.169932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.389 [2024-11-20 16:06:32.169940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:01.389 [2024-11-20 16:06:32.171796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.389 [2024-11-20 16:06:32.171901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.389 [2024-11-20 16:06:32.171904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.389 [2024-11-20 16:06:32.323345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.389 Malloc0 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.389 Delay0 
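With nvmf_tgt running inside cvl_0_0_ns_spdk and answering on the default /var/tmp/spdk.sock RPC socket (the rpc_addr that waitforlisten polled above), the test configures it entirely over JSON-RPC: a TCP transport, a 64 MB malloc bdev with 4096-byte blocks, and a delay bdev on top of it so that I/O stays outstanding long enough to be aborted; the rpc_cmd calls that follow then publish Delay0 as nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. The same sequence issued directly with scripts/rpc.py would look roughly like this (flags copied from the trace, paths relative to the spdk repo root):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000    # large artificial latency on Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The abort example is then pointed at that listener with -q 128, which the controller's I/O queue size cannot fully accommodate (the "controller IO queue size 128 less than required" notice below), so requests queue in the host driver and become the candidates for the abort commands counted in the summary that follows.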
00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.389 [2024-11-20 16:06:32.403017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.389 16:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:01.389 [2024-11-20 16:06:32.539862] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:03.951 Initializing NVMe Controllers 00:05:03.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:03.951 controller IO queue size 128 less than required 00:05:03.951 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:03.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:03.951 Initialization complete. Launching workers. 
00:05:03.951 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37330 00:05:03.951 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37391, failed to submit 62 00:05:03.951 success 37334, unsuccessful 57, failed 0 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:03.951 rmmod nvme_tcp 00:05:03.951 rmmod nvme_fabrics 00:05:03.951 rmmod nvme_keyring 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1742583 ']' 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1742583 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1742583 ']' 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1742583 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1742583 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1742583' 00:05:03.951 killing process with pid 1742583 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1742583 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1742583 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:03.951 16:06:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:03.951 16:06:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:05.927 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:05.927 00:05:05.927 real 0m11.338s 00:05:05.927 user 0m11.927s 00:05:05.927 sys 0m5.520s 00:05:05.927 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.927 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:05.927 ************************************ 00:05:05.927 END TEST nvmf_abort 00:05:05.927 ************************************ 00:05:05.927 16:06:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:05.927 16:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:05.927 16:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.927 16:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:05.927 ************************************ 00:05:05.927 START TEST nvmf_ns_hotplug_stress 00:05:05.927 ************************************ 00:05:05.927 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:06.187 * Looking for test storage... 
00:05:06.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.187 --rc genhtml_branch_coverage=1 00:05:06.187 --rc genhtml_function_coverage=1 00:05:06.187 --rc genhtml_legend=1 00:05:06.187 --rc geninfo_all_blocks=1 00:05:06.187 --rc geninfo_unexecuted_blocks=1 00:05:06.187 00:05:06.187 ' 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.187 --rc genhtml_branch_coverage=1 00:05:06.187 --rc genhtml_function_coverage=1 00:05:06.187 --rc genhtml_legend=1 00:05:06.187 --rc geninfo_all_blocks=1 00:05:06.187 --rc geninfo_unexecuted_blocks=1 00:05:06.187 00:05:06.187 ' 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.187 --rc genhtml_branch_coverage=1 00:05:06.187 --rc genhtml_function_coverage=1 00:05:06.187 --rc genhtml_legend=1 00:05:06.187 --rc geninfo_all_blocks=1 00:05:06.187 --rc geninfo_unexecuted_blocks=1 00:05:06.187 00:05:06.187 ' 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.187 --rc genhtml_branch_coverage=1 00:05:06.187 --rc genhtml_function_coverage=1 00:05:06.187 --rc genhtml_legend=1 00:05:06.187 --rc geninfo_all_blocks=1 00:05:06.187 --rc geninfo_unexecuted_blocks=1 00:05:06.187 00:05:06.187 ' 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.187 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:06.188 16:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:12.761 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.761 
16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:12.761 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:12.761 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:12.762 Found net devices under 0000:86:00.0: cvl_0_0 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:12.762 Found net devices under 0000:86:00.1: cvl_0_1 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:12.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:12.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:05:12.762 00:05:12.762 --- 10.0.0.2 ping statistics --- 00:05:12.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.762 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:12.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:12.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:05:12.762 00:05:12.762 --- 10.0.0.1 ping statistics --- 00:05:12.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.762 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1746735 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1746735 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1746735 ']' 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:12.762 [2024-11-20 16:06:43.457068] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:05:12.762 [2024-11-20 16:06:43.457119] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:12.762 [2024-11-20 16:06:43.536218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.762 [2024-11-20 16:06:43.577026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:12.762 [2024-11-20 16:06:43.577061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:12.762 [2024-11-20 16:06:43.577068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.762 [2024-11-20 16:06:43.577075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.762 [2024-11-20 16:06:43.577080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
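[Editor's note] For readers following the trace, the nvmf_tcp_init step above reduces to a small amount of namespace plumbing: the target-side port (cvl_0_0 in this run) is moved into a private network namespace and given 10.0.0.2, the initiator-side port (cvl_0_1) stays in the default namespace with 10.0.0.1, TCP port 4420 is opened in iptables, both directions are ping-checked, the kernel nvme-tcp module is loaded, and nvmf_tgt is launched inside the namespace. A condensed sketch of those commands, taken from the trace (the iptables comment tag and the harness wrappers are omitted, so treat this as an illustration rather than the harness script itself):

    # condensed sketch of the traced nvmf_tcp_init steps; interface names and addresses are from this run
    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"                                        # private namespace for the target port
    ip link set cvl_0_0 netns "$TARGET_NS"                           # move the target NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address, default namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP traffic on the default port
    ping -c 1 10.0.0.2                                               # initiator -> target reachability check
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                    # target -> initiator reachability check
    modprobe nvme-tcp                                                # kernel initiator used by later connect tests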
00:05:12.762 [2024-11-20 16:06:43.578537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.762 [2024-11-20 16:06:43.578644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.762 [2024-11-20 16:06:43.578645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:12.762 [2024-11-20 16:06:43.892671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.762 16:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:13.020 16:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:13.278 [2024-11-20 16:06:44.286109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:13.278 16:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:13.278 16:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:13.536 Malloc0 00:05:13.536 16:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:13.795 Delay0 00:05:13.795 16:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.053 16:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:14.311 NULL1 00:05:14.311 16:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:14.311 16:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:14.311 16:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1747078 00:05:14.311 16:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:14.311 16:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.683 Read completed with error (sct=0, sc=11) 00:05:15.683 16:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.941 16:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:15.941 16:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:15.941 true 00:05:15.941 16:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:15.941 16:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.873 16:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.131 16:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:17.131 16:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:17.131 true 00:05:17.131 16:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:17.131 16:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.387 16:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
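[Editor's note] The stress scenario is wired up entirely through rpc.py, as the calls traced above and the loop that follows show: a TCP transport with 8192-byte in-capsule data, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, a Malloc0-backed Delay0 bdev plus a 1000-block NULL1 bdev exported as namespaces, and a 30-second spdk_nvme_perf randread load (PERF_PID 1747078 in this run). While perf stays alive, the loop keeps removing namespace 1, re-adding Delay0, and growing NULL1 by one block. A hedged paraphrase of that flow, reconstructed from the traced rpc.py invocations (variable names are illustrative and the real ns_hotplug_stress.sh may differ in detail):

    # paraphrase of the traced setup; nvmf_tgt was started earlier inside the target namespace
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &                    # background reader for 30 s
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID"; do                                    # keep hot-plugging while perf still runs
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # yank namespace 1 out from under the reader
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # plug it back in
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"                     # grow NULL1 each iteration
    done

The kill -0 check is what ties the loop to the perf run: the namespace swaps and resizes stop as soon as the 30-second reader exits.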
00:05:17.645 16:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:17.645 16:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:17.902 true 00:05:17.902 16:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:17.902 16:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.844 16:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.101 16:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:19.101 16:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:19.359 true 00:05:19.359 16:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:19.359 16:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.617 16:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.617 16:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:19.617 16:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:19.875 true 00:05:19.875 16:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:19.875 16:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.133 16:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.390 16:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:20.390 16:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:20.390 true 00:05:20.390 16:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:20.390 16:06:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.648 16:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.906 16:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:20.906 16:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:21.164 true 00:05:21.164 16:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:21.164 16:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.098 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.098 16:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.098 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.356 16:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:22.356 16:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:22.613 true 00:05:22.613 16:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:22.613 16:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.871 16:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.871 16:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:22.871 16:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:23.129 true 00:05:23.129 16:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:23.129 16:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.502 16:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.502 16:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:24.502 16:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:24.760 true 00:05:24.760 16:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:24.760 16:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.692 16:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.692 16:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:25.692 16:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:25.950 true 00:05:25.950 16:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:25.950 16:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.208 16:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.466 16:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:26.466 16:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:26.466 true 00:05:26.466 16:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:26.466 16:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.838 16:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.838 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:05:27.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.838 16:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:27.838 16:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:28.096 true 00:05:28.096 16:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:28.096 16:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.029 16:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.029 16:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:29.029 16:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:29.286 true 00:05:29.286 16:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:29.286 16:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.544 16:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.544 16:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:29.544 16:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:29.801 true 00:05:29.801 16:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:29.801 16:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.058 16:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.350 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:05:30.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.350 [2024-11-20 16:07:01.347565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error repeats roughly 130 more times with timestamps 16:07:01.347650 through 16:07:01.352923; repetitions elided ...] 00:05:30.352 [2024-11-20 16:07:01.353725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB
1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.353774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.353820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.353864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.353905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.353944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.353989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354782] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.354983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 
[2024-11-20 16:07:01.355886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.355975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.352 [2024-11-20 16:07:01.356739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.356782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.356826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.356872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.356917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.356963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.357968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358505] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.358987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 
[2024-11-20 16:07:01.359664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.359978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.353 [2024-11-20 16:07:01.360749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.360794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.360839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.360875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.360913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.360950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.360991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361863] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.361977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.362019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.362061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.362103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.362146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.362187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.362239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.362285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 
[2024-11-20 16:07:01.363630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.363982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.364027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.364069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.364113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.364158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.364209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.364260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.364302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.364350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.364400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.364444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.354 [2024-11-20 16:07:01.364491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.364537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.364576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.364621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.364667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.364713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.364758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.364805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.364848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.364892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.364933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.364977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.365971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366014] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.366994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 
[2024-11-20 16:07:01.367114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.355 [2024-11-20 16:07:01.367874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.367918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.367958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.367992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.368476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356 [2024-11-20 16:07:01.369885] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356
[2024-11-20 16:07:01.369922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.356
(identical *ERROR* entries repeated continuously through 16:07:01.378268; duplicate lines elided) 00:05:30.358
[2024-11-20 16:07:01.378309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.358
16:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:30.358
(identical *ERROR* entries from 16:07:01.378349 through 16:07:01.378651 elided)
16:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
(identical *ERROR* entries from 16:07:01.378696 through 16:07:01.378771 elided)
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:30.359
(identical *ERROR* entries from 16:07:01.379613 through 16:07:01.380099 elided) 00:05:30.359
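The two ns_hotplug_stress.sh steps above show the stress loop bumping null_size and then resizing the NULL1 null bdev over JSON-RPC; the surrounding nvmf_bdev_ctrlr_read_cmd errors are the reads that keep running against the bdev while it is being resized. A minimal shell sketch of that resize step, assuming only what the log records (a running SPDK target with a null bdev named NULL1 and the in-tree rpc.py client); this is an illustration, not the actual ns_hotplug_stress.sh:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # rpc.py path as recorded in the log
  null_size=1016                                                           # size value seen at ns_hotplug_stress.sh@49
  # resize the NULL1 null bdev while the read workload keeps running against it
  "$rpc_py" bdev_null_resize NULL1 "$null_size"
  null_size=$((null_size + 1))                                             # the stress loop repeats with the next size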
[2024-11-20 16:07:01.380146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.359
(identical *ERROR* entries repeated continuously through 16:07:01.396710; duplicate lines elided) 00:05:30.363
[2024-11-20 16:07:01.396755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.396801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.396854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.396899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.396942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.396991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.363 [2024-11-20 16:07:01.397694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.397734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.397780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.397828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.397871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.397915] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.397954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.397995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.398039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.398079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.398117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.398155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.398193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.398242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.398286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.398331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 
[2024-11-20 16:07:01.399839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.399979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.400983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.364 [2024-11-20 16:07:01.401812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.401849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.401894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402170] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.402773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 
[2024-11-20 16:07:01.403775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.403994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.404983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.365 [2024-11-20 16:07:01.405665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.405705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.405744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.405783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.405824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.405863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.405904] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.405941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.406982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 
[2024-11-20 16:07:01.407269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.407965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.408868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.409655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.409706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.409764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.409809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.409850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.409894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.409941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.409985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.410030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.410086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.366 [2024-11-20 16:07:01.410131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410231] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.410987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 
[2024-11-20 16:07:01.411408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.411969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.412967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.413011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.413056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.413107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.413152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.367 [2024-11-20 16:07:01.413200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.368 [2024-11-20 16:07:01.413252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.368 [2024-11-20 16:07:01.413296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.368 [2024-11-20 16:07:01.413781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.368 [2024-11-20 16:07:01.413834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.368 [2024-11-20 16:07:01.413878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.368 [2024-11-20 16:07:01.413920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.368 [2024-11-20 16:07:01.413964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.368 [2024-11-20 16:07:01.414015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.368 [2024-11-20 16:07:01.414059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.368 [2024-11-20 16:07:01.414103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.368 [2024-11-20 16:07:01.414157] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:30.368 [... the *ERROR* line above, from ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd, is repeated several hundred times with timestamps 16:07:01.414 through 16:07:01.439; only the timestamps differ, so the duplicate entries are omitted here ...]
00:05:30.372 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-11-20 16:07:01.439051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.374 [2024-11-20 16:07:01.439930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.440725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.440775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.440827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.440869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.440916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.440964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441946] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.441990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.442961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 
[2024-11-20 16:07:01.443016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.443961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.444004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.444048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.444093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.444139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.444184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.375 [2024-11-20 16:07:01.444233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.444282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.444748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.444795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.444834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.444875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.444918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.444965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445766] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.445963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 
[2024-11-20 16:07:01.446916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.446961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.447957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.448000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.448040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.448080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.448126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.448169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.448215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.376 [2024-11-20 16:07:01.448262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.448982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449212] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.449991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.450039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.450083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.450129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.450177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.450226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.450270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.450314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 
[2024-11-20 16:07:01.451118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.451999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.452039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.452081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.452123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.452166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.452208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.452246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.452283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.377 [2024-11-20 16:07:01.452324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.452994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453287] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.453992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 
[2024-11-20 16:07:01.454476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.454555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.378 [2024-11-20 16:07:01.455658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.455701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.455743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.455793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.455838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.455884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.455932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.455977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.456972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.457015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.457052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.457090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.457134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.379 [2024-11-20 16:07:01.457169] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:30.379 [the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats several hundred times, 2024-11-20 16:07:01.457244 through 16:07:01.484261 (console time 00:05:30.379 to 00:05:30.386); duplicate lines trimmed]
00:05:30.385 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-11-20 16:07:01.484305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.484956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.485963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.486002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.486043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.386 [2024-11-20 16:07:01.486084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486614] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.486975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 
[2024-11-20 16:07:01.487821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.487961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.488976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.489011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.489054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.489092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.489133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.489177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.489220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.489258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.489307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.490146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.490188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.490234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.490278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.490320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.490374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.387 [2024-11-20 16:07:01.490424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490833] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.490966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.491926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 
[2024-11-20 16:07:01.491965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.492863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.493746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.494227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.494281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.494332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.494380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.388 [2024-11-20 16:07:01.494425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494797] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.494984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 
[2024-11-20 16:07:01.495882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.495978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.496915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.389 [2024-11-20 16:07:01.497588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.497627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.497664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.497710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.497750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.497781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.497821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.497858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.497908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.497949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.497987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498140] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.498960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 
[2024-11-20 16:07:01.499284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.499790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.500556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.500600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.500641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.500682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.500725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.500763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.500804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.390 [2024-11-20 16:07:01.500841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.391 [2024-11-20 16:07:01.500879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.391 [2024-11-20 16:07:01.500928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.391 [2024-11-20 16:07:01.500969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.391 [2024-11-20 16:07:01.501011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.391 [2024-11-20 16:07:01.501047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.391 [2024-11-20 16:07:01.501088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.391 [2024-11-20 16:07:01.501129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:30.391 - 00:05:30.397 [identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" lines repeated several hundred times in this unit-test output; duplicates omitted]
size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.528975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529308] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.529776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:30.397 [2024-11-20 16:07:01.530562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.530607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.530652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.530692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.530724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.530764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.530806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.530849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.530889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.530927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.530974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.531014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.531056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.531097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.397 [2024-11-20 16:07:01.531136] ctrlr_bdev.c: 
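The error above comes from a length check: the read command asks for NLB * block size = 1 * 512 = 512 bytes, but the SGL attached to the command only describes 1 byte, so the request is rejected before any bdev I/O is issued. Below is a minimal standalone C sketch of that kind of validation, assuming a simplified request structure; the names (fake_read_req, check_read_length) are hypothetical and this is not SPDK's actual ctrlr_bdev.c code.

/*
 * Minimal sketch of the length validation suggested by the log above.
 * All types and names are illustrative, not SPDK source.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified request: only the fields the check needs. */
struct fake_read_req {
	uint64_t nlb;        /* number of logical blocks requested */
	uint32_t block_size; /* namespace block size in bytes */
	uint32_t sgl_length; /* total byte length described by the SGL */
};

/* Returns true if the read is accepted, false if it must be rejected. */
static bool
check_read_length(const struct fake_read_req *req)
{
	uint64_t bytes = req->nlb * (uint64_t)req->block_size;

	if (bytes > req->sgl_length) {
		fprintf(stderr,
			"*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			req->nlb, req->block_size, req->sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* Mirrors the values in the log: 1 block * 512 bytes vs. a 1-byte SGL. */
	struct fake_read_req req = { .nlb = 1, .block_size = 512, .sgl_length = 1 };

	/* In this example the request is rejected, matching the log output. */
	printf("read %s\n", check_read_length(&req) ? "accepted" : "rejected");
	return 0;
}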
[... the repeated nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error continues with consecutive timestamps through 16:07:01.550665 ...]
00:05:30.685 [2024-11-20 16:07:01.550709] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.550753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.550794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.550837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.550874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.550914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.550955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.550995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 
[2024-11-20 16:07:01.551769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.685 [2024-11-20 16:07:01.551949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.551998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.552983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.553029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.553076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.553120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.553164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.553217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.553264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.553304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.553812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.553866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.553912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.553960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554461] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.554977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 
[2024-11-20 16:07:01.555584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.555977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.556018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.556056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.556094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.556131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.556165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.556212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.686 [2024-11-20 16:07:01.556259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.556298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.556339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.556381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.556424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.556466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.557962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558508] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.558993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 
[2024-11-20 16:07:01.559540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.559934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.560849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.561385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.561436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.561482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.561522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.687 [2024-11-20 16:07:01.561559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.561594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.561635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.561679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.561715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.561761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.561803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.561841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.561879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.561919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.561965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562366] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.562979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 
[2024-11-20 16:07:01.563435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.563991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.564957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.688 [2024-11-20 16:07:01.565572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.565614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.565652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.565686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.565724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.565763] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.565805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.565848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.565888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.565932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.565972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 [2024-11-20 16:07:01.566841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.689 
[2024-11-20 16:07:01.567653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd error ("Read NLB 1 * block size 512 > SGL length 1") repeats continuously before, between, and after the lines kept below ...]
00:05:30.689 true
00:05:30.692 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:05:30.694 16:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078
00:05:30.694 16:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... identical nvmf_bdev_ctrlr_read_cmd errors continue, the last in this span at 2024-11-20 16:07:01.593394 ...]
[2024-11-20 16:07:01.593432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.593473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.593515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.593558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.593611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.593660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.593702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.594503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.594553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.594597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.594641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.594694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.594741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.594785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.594832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.594877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.594920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.594967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.695 [2024-11-20 16:07:01.595697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.595737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.595777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.595810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.595847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.595890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.595926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.595964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596429] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.596986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 
[2024-11-20 16:07:01.597618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.597968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.598986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.599025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.599062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.599106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.599144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.599183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.599237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.599270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.599306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.696 [2024-11-20 16:07:01.599344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599773] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.599970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.600732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.600780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.600825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.600869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.600912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.600957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 
[2024-11-20 16:07:01.601629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.601960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.602969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603873] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.603962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.604008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.604055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.604110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.604153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.697 [2024-11-20 16:07:01.604197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.604994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.605042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 
[2024-11-20 16:07:01.605084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.605659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.605703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.605746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.605790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.605821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.605857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.605895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.605933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.605971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.606990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607776] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.607957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.698 [2024-11-20 16:07:01.608662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.608706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.608751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.608793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.608828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.608867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.608909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.608947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.608985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 
[2024-11-20 16:07:01.609067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.609959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.610000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.610041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.610080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.699 [2024-11-20 16:07:01.610120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:05:30.699 [duplicate "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages repeated from 16:07:01.610 through 16:07:01.637 omitted] 
00:05:30.704 Message suppressed 999 times: [2024-11-20 16:07:01.630828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:05:30.704 Read completed with error (sct=0, sc=15) 
00:05:30.705 [2024-11-20 16:07:01.637017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 
[2024-11-20 16:07:01.637064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.637971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.638994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.705 [2024-11-20 16:07:01.639035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639189] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.639551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.640964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 
[2024-11-20 16:07:01.641119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.641966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.642990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643381] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.643978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.644024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.644071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.706 [2024-11-20 16:07:01.644125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 
[2024-11-20 16:07:01.644542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.644985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.645826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.646619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.646672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.646716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.646759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.646804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.646850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.646895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.646946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.646992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647461] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.647995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 
[2024-11-20 16:07:01.648532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.707 [2024-11-20 16:07:01.648954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.648993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.649992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650802] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.650960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 
[2024-11-20 16:07:01.651948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.651993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.652036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.652084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.652131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.652178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.652971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.708 [2024-11-20 16:07:01.653801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the preceding *ERROR* line from ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd repeats verbatim (hundreds of occurrences) between 16:07:01.653844 and 16:07:01.680383 while the unit test exercises the invalid-SGL read path; duplicate lines elided ...]
00:05:30.714 [2024-11-20 16:07:01.680422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.680460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.680501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.680545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.680585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.680786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.680828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:30.714 [2024-11-20 16:07:01.680868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.680910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.680948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.680985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.714 [2024-11-20 16:07:01.681613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.681658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:30.715 [2024-11-20 16:07:01.681699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.681751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.681797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.681840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.681889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.681931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.681978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.682992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.683032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.683076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.683116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.683155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.683194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.683243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.683283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.683319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.683361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.683405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684621] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.684990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 
[2024-11-20 16:07:01.685814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.685968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.715 [2024-11-20 16:07:01.686567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.686603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.686645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.686686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.686729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.686767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.686806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.686846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.686884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.686926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.687986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688032] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.688960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 
[2024-11-20 16:07:01.689167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.689729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.690523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.690575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.690625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.690666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.690713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.690759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.690804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.690848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.690894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.690941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.690985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.716 [2024-11-20 16:07:01.691633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.691677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.691730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.691777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.691820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.691867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.691911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.691955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692189] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.692997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 
[2024-11-20 16:07:01.693278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.693963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.694993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695884] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.695998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.696034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.696074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.696112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.696147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.696189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.717 [2024-11-20 16:07:01.696232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.696964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.697011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.697058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 [2024-11-20 16:07:01.697099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.718 
[2024-11-20 16:07:01.697152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error ("Read NLB 1 * block size 512 > SGL length 1") repeats several hundred times between 2024-11-20 16:07:01.697152 and 16:07:01.724585 (console timestamps 00:05:30.718-00:05:30.723), differing only in the microsecond timestamp; the duplicate lines are omitted here ...]
[2024-11-20 16:07:01.724628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.723 [2024-11-20 16:07:01.724671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.723 [2024-11-20 16:07:01.724713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.723 [2024-11-20 16:07:01.724755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.723 [2024-11-20 16:07:01.724804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.723 [2024-11-20 16:07:01.724848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.723 [2024-11-20 16:07:01.724889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.723 [2024-11-20 16:07:01.724933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.723 [2024-11-20 16:07:01.724972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.723 [2024-11-20 16:07:01.725002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.723 [2024-11-20 16:07:01.725045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.725080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.725128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.725172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.725217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.725261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.725303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.725342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.725381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.725420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.725458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.726988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727684] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.727972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 
[2024-11-20 16:07:01.728796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.728970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.729013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.729056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.729097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.729667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.729710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.729748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.729791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.729833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.729879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.729920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.729970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.730956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.731010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.731056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.724 [2024-11-20 16:07:01.731106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731607] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.731958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:30.725 [2024-11-20 16:07:01.732602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732810] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.732990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.733980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 
[2024-11-20 16:07:01.734029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.734984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.735023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.735073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.735113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.735158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.735199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.735246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.735288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.735327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.735385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.735422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.735460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.736269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.736314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.736353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.736394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.736435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.736473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.725 [2024-11-20 16:07:01.736513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.736550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.736587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.736630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.736675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.736724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.736770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.736815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.736871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.736916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.736959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737055] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.737971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 
[2024-11-20 16:07:01.738223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.738955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.739968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740625] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.726 [2024-11-20 16:07:01.740854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.727 [2024-11-20 16:07:01.740911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.727 [2024-11-20 16:07:01.740959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.727 [2024-11-20 16:07:01.741004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.727 [2024-11-20 16:07:01.741051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.727 [2024-11-20 16:07:01.741100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.727 [2024-11-20 16:07:01.741143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.727 [2024-11-20 16:07:01.741186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.727 [2024-11-20 16:07:01.741227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.727 [2024-11-20 16:07:01.741270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.727 [2024-11-20 16:07:01.741312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.727 [2024-11-20 16:07:01.741354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.728 [2024-11-20 16:07:01.741395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.728 [2024-11-20 16:07:01.741426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.728 [2024-11-20 16:07:01.741468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.728 [2024-11-20 16:07:01.741512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.728 [2024-11-20 16:07:01.741552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.728 [2024-11-20 16:07:01.741591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.728 [2024-11-20 16:07:01.741630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.728 [2024-11-20 16:07:01.741674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.728 [2024-11-20 16:07:01.741713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.728 
[2024-11-20 16:07:01.741752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.728 
[... the same ctrlr_bdev.c:361 error, "Read NLB 1 * block size 512 > SGL length 1", repeats continuously from 16:07:01.741752 through 16:07:01.769306 (job time 00:05:30.728-00:05:30.735); the duplicate console lines are omitted here ...]
[2024-11-20 16:07:01.769356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.769956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.770006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.770053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.770095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.770128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.770167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.770210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.770247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.770285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.770328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.770372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.735 [2024-11-20 16:07:01.770412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 [2024-11-20 16:07:01.770973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:30.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.736 16:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.011 [2024-11-20 16:07:01.974463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 
16:07:01.974747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.974991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:31.011 [2024-11-20 16:07:01.975863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.975956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.976000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.011 [2024-11-20 16:07:01.976044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.976982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.977997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978196] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.978992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 
[2024-11-20 16:07:01.979227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.979836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.980355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.980403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.980452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.980492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.980534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.980581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.012 [2024-11-20 16:07:01.980623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.980654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.980691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.980731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.980767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.980807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.980845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.980882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.980920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.980957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.980990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981703] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.981989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 
[2024-11-20 16:07:01.982859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.982987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.983745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.983779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.983821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.983864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.983904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.983943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.983983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.984995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.985037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.985080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.985133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.985181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.985227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.985268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.013 [2024-11-20 16:07:01.985315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985676] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.985991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 
[2024-11-20 16:07:01.986962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.986999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.987966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.988011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.988048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.988088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.988125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.988161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.988197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.988243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.014 [2024-11-20 16:07:01.988273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:05:31.014 [2024-11-20 16:07:01.988304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:31.014 [... the same ctrlr_bdev.c:361 *ERROR* line repeats several hundred times between 16:07:01.988 and 16:07:02.005; per-message timestamps omitted ...]
00:05:31.018 16:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:05:31.018 16:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
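The two shell-trace lines above come from the ns_hotplug_stress driver: it bumps null_size and calls scripts/rpc.py bdev_null_resize NULL1 1017 to resize the NULL1 null bdev while reads are still in flight. The flood of ctrlr_bdev.c:361 messages is the target-side read validation rejecting commands whose requested transfer length (NLB 1 * block size 512 = 512 bytes) exceeds the 1-byte data buffer described by the command's SGL; the matching host-side completions, summarized a little further down as "Message suppressed 999 times", carry sct=0, sc=15, which corresponds to the NVMe generic status 0x0f (Data SGL Length Invalid). The standalone C program below is a simplified sketch of that kind of length check, assuming illustrative names (example_read_req, sgl_length, EX_SC_* and so on); it is not the actual nvmf_bdev_ctrlr_read_cmd source.

/*
 * Standalone sketch of the length check behind the repeated
 * "Read NLB 1 * block size 512 > SGL length 1" messages above.
 * Illustration only: struct layout, field names and status
 * constants are placeholders, not the SPDK ctrlr_bdev.c source.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* NVMe-style generic (sct=0) status codes used by the example. */
enum {
	EX_SC_SUCCESS                 = 0x00,
	EX_SC_DATA_SGL_LENGTH_INVALID = 0x0f,  /* shows up host-side as sc=15 */
};

struct example_read_req {
	uint64_t nlb;         /* number of logical blocks to read */
	uint32_t block_size;  /* bytes per block of the backing bdev */
	uint32_t sgl_length;  /* bytes available in the command's data SGL */
};

/* Returns the status code the target would complete the read with. */
static int example_validate_read(const struct example_read_req *req)
{
	uint64_t transfer_len = req->nlb * (uint64_t)req->block_size;

	if (transfer_len > req->sgl_length) {
		fprintf(stderr, "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			req->nlb, req->block_size, req->sgl_length);
		return EX_SC_DATA_SGL_LENGTH_INVALID;
	}
	return EX_SC_SUCCESS;
}

int main(void)
{
	/* The case in the log: a 1-block (512 B) read backed by a 1-byte SGL. */
	struct example_read_req bad  = { .nlb = 1, .block_size = 512, .sgl_length = 1 };
	/* A well-formed request of the same size for comparison. */
	struct example_read_req good = { .nlb = 1, .block_size = 512, .sgl_length = 512 };

	printf("bad request  -> sc=0x%02x\n", example_validate_read(&bad));
	printf("good request -> sc=0x%02x\n", example_validate_read(&good));
	return 0;
}

With the values from the log, the first call prints the same *ERROR* line and returns 0x0f, matching the sc=15 completions reported by the host.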
00:05:31.018 [2024-11-20 16:07:02.005996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:31.019 [... the same *ERROR* line keeps repeating ...]
00:05:31.019 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:05:31.019 [... further repetitions of the same *ERROR* line through 16:07:02.015; per-message timestamps omitted ...] 00:05:31.020
[2024-11-20 16:07:02.015372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.015981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.016743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.017963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.018010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.018055] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.018097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.018149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.020 [2024-11-20 16:07:02.018191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.018958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 
[2024-11-20 16:07:02.019119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.019936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.020982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021560] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.021971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.022012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.022051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.022096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.022130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.022167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.022212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.022258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.022305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.022345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.022387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.021 [2024-11-20 16:07:02.022426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.022468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.022507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.022550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.022593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 
[2024-11-20 16:07:02.022632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.022673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.022710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.022748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.022783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.022822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.022868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.023661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.023709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.023755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.023804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.023847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.023892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.023945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.023988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.024994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025550] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.025993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 
[2024-11-20 16:07:02.026815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.026951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.022 [2024-11-20 16:07:02.027555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.027597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.027642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.027692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.027736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.027778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.027825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.027871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.027920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.027965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.028960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.029002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.029044] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.029084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.029129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.029170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.029215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.029270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.030983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 
[2024-11-20 16:07:02.031076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.031987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.032025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.032064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.032103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.023 [2024-11-20 16:07:02.032151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:31.023 - 00:05:31.029 [2024-11-20 16:07:02.032 - 16:07:02.059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same error repeated for each read command in this unit-test pass)
00:05:31.029 [2024-11-20 16:07:02.059307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.059969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.060009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.060049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.060099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.060145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.060189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.029 [2024-11-20 16:07:02.060234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060575] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:31.030 [2024-11-20 16:07:02.060869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.060985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.061582] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.062950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 
[2024-11-20 16:07:02.063260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.063966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.064008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.064049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.064092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.064132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.064173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.064222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.064260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.064298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.064336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.064376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.030 [2024-11-20 16:07:02.064416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.064457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.064502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.064542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.064582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.064624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.064660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.064698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.064733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.064775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.064816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.064998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065548] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.065958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 
[2024-11-20 16:07:02.066724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.066997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.067691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.068515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.068565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.068613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.068661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.068711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.068757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.068798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.068847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.068893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.068936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.068981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.031 [2024-11-20 16:07:02.069546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.069592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.069642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.069686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.069732] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.069774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.069816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.069866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.069912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.069955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.069986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 
[2024-11-20 16:07:02.070798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.070970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.071974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.072990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073228] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.032 [2024-11-20 16:07:02.073909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.073949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.073987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.074027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.074069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.074113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.074153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 
[2024-11-20 16:07:02.075109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.075988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.076027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.076065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.076099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.076139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.076178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.033 [2024-11-20 16:07:02.076219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd repeats continuously through [2024-11-20 16:07:02.102629] (elapsed 00:05:31.033 to 00:05:31.038); every instance reports the same rejected read, NLB 1 times block size 512 exceeding an SGL length of 1 ...]
00:05:31.038 [2024-11-20 16:07:02.102675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.102713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.102755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.102800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.102841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.102882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.103780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.103833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.103878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.103923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.103967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104701] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.104988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 
[2024-11-20 16:07:02.105827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.105993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.106997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.107041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.107088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.107137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.039 [2024-11-20 16:07:02.107180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.107228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.107275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.107319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.107363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.107412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.107869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.107916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.107957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.107994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108517] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.108979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 
[2024-11-20 16:07:02.109686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.109968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.110959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:31.040 [2024-11-20 16:07:02.111093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.040 [2024-11-20 16:07:02.111628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.111661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.111703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.111747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.111787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.111828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.111871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.111913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.111953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.111999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:31.041 [2024-11-20 16:07:02.112079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.112974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.113023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.113068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.113113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.113169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.113217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.113260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.113306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.113354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.113397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.113446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.113492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.114960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115004] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.115990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.116043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.116084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 
[2024-11-20 16:07:02.116129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.116176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.116229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.041 [2024-11-20 16:07:02.116277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.116978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.117971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118574] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.118972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 [2024-11-20 16:07:02.119604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042 
[2024-11-20 16:07:02.119649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.042
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line ("Read NLB 1 * block size 512 > SGL length 1") repeats several hundred times, with test timestamps running from 16:07:02.119649 through 16:07:02.146847 and pipeline timestamps 00:05:31.042 through 00:05:31.048 ...]
[2024-11-20 16:07:02.146890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.146926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.146969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.147010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.147049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.147091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.147130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.147169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.147959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.148969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.149007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.149049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.149097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.149136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.149177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.149226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.149267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.149302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.149340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.149379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.048 [2024-11-20 16:07:02.149417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149831] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.149960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.150997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 
[2024-11-20 16:07:02.151136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.151977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.152959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.153002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.153041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.153084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.153124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.153162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.153213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.153254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.153793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.153849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.153902] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.153961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.049 [2024-11-20 16:07:02.154845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.154893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.154945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.154994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 
[2024-11-20 16:07:02.155193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.155988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.156982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.157027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.157069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.157106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.157150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.157200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.157859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.157909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.157952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.157996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158147] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.158958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 
[2024-11-20 16:07:02.159343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.159963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.160007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.160046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.160091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.160128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.160160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.050 [2024-11-20 16:07:02.160197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.051 [2024-11-20 16:07:02.160242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:31.051 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:31.051 true 00:05:31.051 16:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:31.051 16:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.981 16:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.238 16:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:32.238 16:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:32.496 true 00:05:32.496 16:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:32.496 16:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.753 16:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.010 16:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:33.010 16:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:33.010 true 00:05:33.010 16:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:33.010 16:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.380 16:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.380 16:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:34.380 16:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:34.638 true 00:05:34.638 16:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:34.638 16:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.568 16:07:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.568 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.568 16:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:35.569 16:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:35.825 true 00:05:35.826 16:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:35.826 16:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.082 16:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.339 16:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:36.339 16:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:36.339 true 00:05:36.596 16:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:36.596 16:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.526 16:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.784 16:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:37.784 16:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:38.040 true 00:05:38.040 16:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:38.040 16:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.972 16:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.972 16:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:38.972 16:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:39.228 true 00:05:39.228 16:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:39.228 16:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.485 16:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.742 16:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:39.742 16:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:39.742 true 00:05:39.742 16:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:39.742 16:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.112 16:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.112 16:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:41.112 16:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:41.112 true 00:05:41.112 16:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:41.112 16:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.369 16:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.626 16:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:41.627 16:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:41.884 true 00:05:41.884 16:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:41.884 16:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.817 16:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.116 16:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:43.116 16:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:43.410 true 00:05:43.410 16:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:43.410 16:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.342 16:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.342 16:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:44.342 16:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:44.600 true 00:05:44.600 16:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:44.600 16:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.600 Initializing NVMe Controllers 00:05:44.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:44.600 Controller IO queue size 128, less than required. 00:05:44.600 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:44.600 Controller IO queue size 128, less than required. 00:05:44.600 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:44.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:44.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:44.600 Initialization complete. Launching workers. 
00:05:44.600 ========================================================
00:05:44.600 Latency(us)
00:05:44.600 Device Information                                                       : IOPS      MiB/s    Average        min        max
00:05:44.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2457.14      1.20   33696.15    2226.03 1190409.63
00:05:44.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16717.19      8.16    7656.87    1572.87  299642.59
00:05:44.600 ========================================================
00:05:44.600 Total                                                                    : 19174.34      9.36   10993.74    1572.87 1190409.63
00:05:44.600 
00:05:44.858 16:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.858 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:44.858 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:45.116 true 00:05:45.116 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1747078 00:05:45.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1747078) - No such process 00:05:45.116 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1747078 00:05:45.116 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.374 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:45.631 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:45.631 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:45.631 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:45.631 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:45.632 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:45.632 null0 00:05:45.632 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:45.632 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:45.632 16:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:45.889 null1 00:05:45.889 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:45.889 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:45.889 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:46.147 null2 00:05:46.147 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.147 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.147 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:46.147 null3 00:05:46.404 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.404 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.404 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:46.404 null4 00:05:46.404 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.404 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.404 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:46.663 null5 00:05:46.663 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.663 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.663 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:46.922 null6 00:05:46.922 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.922 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.922 16:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:46.922 null7 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
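Note: the ns_hotplug_stress.sh@58-@60 trace lines above create eight null bdevs (null0 through null7), each 100 MB with a 4096-byte block size. A minimal sketch of that setup step, assuming a running SPDK target on the default RPC socket and a working directory at the top of an spdk checkout instead of the absolute Jenkins path used in the trace:

  # Create null0..null7 with the same arguments as the bdev_null_create calls
  # in the trace: bdev name, size in MB, block size in bytes.
  for i in $(seq 0 7); do
      ./scripts/rpc.py bdev_null_create "null$i" 100 4096
  done

Each of these bdevs is then hot-plugged into nqn.2016-06.io.spdk:cnode1 as its own namespace by the add_remove workers traced below.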
00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.180 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
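Note: the ns_hotplug_stress.sh@14-@18 markers above outline what each add_remove worker does: it pins one namespace ID to one null bdev and hot-adds and hot-removes that namespace ten times. The function below is reconstructed from those trace markers only, not copied from the script, and again assumes ./scripts/rpc.py on the default RPC socket:

  # Reconstructed from the @14-@18 trace markers: add and then remove one
  # namespace (nsid) backed by one null bdev, ten times in a row.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          ./scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }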
00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
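Note: the @62-@66 markers show the fan-out around that worker: eight add_remove jobs are started in the background, pairing namespace IDs 1-8 with null0-null7, and their PIDs are collected so the script can wait on all of them (the "wait 1752694 ..." entry just below). Roughly, under the same assumptions as the sketches above:

  # Launch eight concurrent add_remove workers and wait for all of them.
  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      add_remove "$((i + 1))" "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"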
00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1752694 1752695 1752697 1752700 1752701 1752703 1752705 1752707 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.181 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.439 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.697 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.697 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.697 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.697 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.697 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.697 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.697 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.697 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.954 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.954 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.954 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.954 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.954 16:07:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.954 16:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.954 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.211 16:07:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.211 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.468 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.468 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.468 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.468 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.468 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.468 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.468 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.468 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.468 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.468 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.469 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.727 16:07:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.727 16:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.985 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.985 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.985 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.985 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.985 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.985 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.985 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.985 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.985 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.985 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.985 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.242 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.243 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.243 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.243 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.243 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.243 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.243 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.501 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.759 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.759 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.759 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.759 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.759 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.759 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.759 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.759 16:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.017 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.017 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.017 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.017 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.017 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:05:50.017 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.018 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.276 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.535 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.535 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.535 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.535 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.535 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.535 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.535 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.535 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.794 16:07:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:50.794 16:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.053 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.053 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.053 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.053 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.053 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.053 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.053 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.053 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.312 
16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:51.312 rmmod nvme_tcp 00:05:51.312 rmmod nvme_fabrics 00:05:51.312 rmmod nvme_keyring 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1746735 ']' 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1746735 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1746735 ']' 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1746735 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1746735 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1746735' 00:05:51.312 killing process with pid 1746735 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1746735 00:05:51.312 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1746735 00:05:51.572 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:51.572 16:07:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:51.572 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:51.572 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:51.572 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:51.572 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:51.572 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:51.572 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:51.572 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:51.572 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.572 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.572 16:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:54.110 00:05:54.110 real 0m47.584s 00:05:54.110 user 3m13.270s 00:05:54.110 sys 0m15.462s 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:54.110 ************************************ 00:05:54.110 END TEST nvmf_ns_hotplug_stress 00:05:54.110 ************************************ 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:54.110 ************************************ 00:05:54.110 START TEST nvmf_delete_subsystem 00:05:54.110 ************************************ 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:54.110 * Looking for test storage... 
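Note on the hotplug trace that just ended above: the interleaved rpc.py calls come from target/ns_hotplug_stress.sh, where line @17 attaches a namespace to nqn.2016-06.io.spdk:cnode1, line @18 detaches one, and the @16 arithmetic is a ten-iteration loop around them; the jumbled namespace ordering and the eight separate loop terminations suggest one such loop per namespace running concurrently. A minimal sketch of that pattern, assuming a hypothetical add_remove helper and plain rpc.py calls (the real script's wrappers and plumbing differ; only the two RPCs and the loop bound are taken from the trace):

  # Sketch only: concurrent attach/detach loops against one subsystem, mirroring
  # the @16-@18 trace above. Assumes the target is already up and the null0..null7
  # bdevs exist (both were created earlier in this test).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  add_remove() {                                                 # hypothetical helper name
      local nsid=$1 bdev=$2 i
      for ((i = 0; i < 10; ++i)); do                             # @16
          $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
          $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
      done
  }
  for n in {1..8}; do
      add_remove "$n" "null$((n - 1))" &                         # ns 1->null0 ... ns 8->null7
  done
  wait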
00:05:54.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.110 --rc genhtml_branch_coverage=1 00:05:54.110 --rc genhtml_function_coverage=1 00:05:54.110 --rc genhtml_legend=1 00:05:54.110 --rc geninfo_all_blocks=1 00:05:54.110 --rc geninfo_unexecuted_blocks=1 00:05:54.110 00:05:54.110 ' 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.110 --rc genhtml_branch_coverage=1 00:05:54.110 --rc genhtml_function_coverage=1 00:05:54.110 --rc genhtml_legend=1 00:05:54.110 --rc geninfo_all_blocks=1 00:05:54.110 --rc geninfo_unexecuted_blocks=1 00:05:54.110 00:05:54.110 ' 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.110 --rc genhtml_branch_coverage=1 00:05:54.110 --rc genhtml_function_coverage=1 00:05:54.110 --rc genhtml_legend=1 00:05:54.110 --rc geninfo_all_blocks=1 00:05:54.110 --rc geninfo_unexecuted_blocks=1 00:05:54.110 00:05:54.110 ' 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.110 --rc genhtml_branch_coverage=1 00:05:54.110 --rc genhtml_function_coverage=1 00:05:54.110 --rc genhtml_legend=1 00:05:54.110 --rc geninfo_all_blocks=1 00:05:54.110 --rc geninfo_unexecuted_blocks=1 00:05:54.110 00:05:54.110 ' 00:05:54.110 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:54.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:54.111 16:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:54.111 16:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:00.692 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:00.692 
16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:00.692 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:00.692 Found net devices under 0000:86:00.0: cvl_0_0 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:00.692 Found net devices under 0000:86:00.1: cvl_0_1 
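The block above is nvmf/common.sh selecting the physical test NICs: it groups the known Intel and Mellanox device IDs, keeps only the e810 entries (this job runs with e810 NICs, as the [[ e810 == e810 ]] checks show), and resolves each PCI function to its kernel netdev through sysfs, yielding cvl_0_0 for 0000:86:00.0 and cvl_0_1 for 0000:86:00.1. A stripped-down sketch of that resolution step; the PCI addresses are hard-coded here for illustration, whereas common.sh discovers them from its PCI bus cache and also checks the link's operstate:

  # Map e810 PCI functions to their kernel net device names by reading
  # /sys/bus/pci/devices/<addr>/net/, as the @411-@428 trace above does.
  pci_addrs=(0000:86:00.0 0000:86:00.1)   # taken from this job's output
  net_devs=()
  for pci in "${pci_addrs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the device name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done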
00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:00.692 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:00.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:00.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:06:00.693 00:06:00.693 --- 10.0.0.2 ping statistics --- 00:06:00.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.693 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:00.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:00.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:06:00.693 00:06:00.693 --- 10.0.0.1 ping statistics --- 00:06:00.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.693 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:00.693 16:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1757085 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1757085 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1757085 ']' 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.693 16:07:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.693 [2024-11-20 16:07:31.084978] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:06:00.693 [2024-11-20 16:07:31.085023] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.693 [2024-11-20 16:07:31.163532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.693 [2024-11-20 16:07:31.202505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:00.693 [2024-11-20 16:07:31.202541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:00.693 [2024-11-20 16:07:31.202548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.693 [2024-11-20 16:07:31.202554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.693 [2024-11-20 16:07:31.202559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:00.693 [2024-11-20 16:07:31.203766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.693 [2024-11-20 16:07:31.203767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.693 [2024-11-20 16:07:31.352971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:00.693 16:07:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.693 [2024-11-20 16:07:31.369166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.693 NULL1 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.693 Delay0 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1757118 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:00.693 16:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:00.693 [2024-11-20 16:07:31.473955] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
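At this point delete_subsystem.sh has the target running inside the cvl_0_0_ns_spdk namespace set up earlier (10.0.0.2 on the target side, 10.0.0.1 on the initiator side, port 4420 opened in iptables) and has built the object it is about to tear down: subsystem nqn.2016-06.io.spdk:cnode1 with a TCP listener, backed by a null bdev wrapped in a delay bdev (Delay0) whose large artificial latencies keep plenty of I/O outstanding. spdk_nvme_perf is then started against that listener and the script sleeps two seconds before issuing the delete, which the trace below shows. A condensed restatement of the traced setup, with rpc_cmd (the test framework's wrapper around scripts/rpc.py) spelled out:

  # Condensed from the delete_subsystem.sh trace above (@15-@30) plus the delete
  # issued at @32 below. $rpc talks to the nvmf_tgt started earlier in this test.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512              # null backing bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # 5 s of 70/30 random read/write at queue depth 128 against the new listener
  $spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # pull the subsystem out from under perf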
00:06:02.594 16:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:02.594 16:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.594 16:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Write completed with error (sct=0, sc=8) 00:06:02.594 starting I/O failed: -6 00:06:02.594 Write completed with error (sct=0, sc=8) 00:06:02.594 Write completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 starting I/O failed: -6 00:06:02.594 Write completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Write completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 starting I/O failed: -6 00:06:02.594 Write completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 starting I/O failed: -6 00:06:02.594 Write completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Write completed with error (sct=0, sc=8) 00:06:02.594 starting I/O failed: -6 00:06:02.594 Write completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 starting I/O failed: -6 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 starting I/O failed: -6 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 starting I/O failed: -6 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.594 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 starting I/O failed: -6 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 
00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 
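The flood of "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines here, which continues below, is the expected outcome of the nvmf_delete_subsystem call above rather than a test failure: the subsystem and its queue pairs are destroyed while spdk_nvme_perf still has commands queued behind the delay bdev, so outstanding commands complete with NVMe generic status 0x8 (command aborted due to SQ deletion, printed as sct=0, sc=8) and new submissions fail with -6 (-ENXIO). The nvme_tcp_qpair_set_recv_state errors are the initiator-side TCP qpairs observing the same teardown.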
00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 [2024-11-20 16:07:33.718827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc194a0 is same with the state(6) to be set 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 starting I/O failed: -6 00:06:02.595 [2024-11-20 16:07:33.722546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb970000c40 is same with the state(6) to be set 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed 
with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Write completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.595 Read completed with error (sct=0, sc=8) 00:06:02.596 Read completed with error (sct=0, sc=8) 00:06:02.596 Read completed with error (sct=0, sc=8) 00:06:02.596 Write completed with error (sct=0, sc=8) 00:06:02.596 Write completed with error (sct=0, sc=8) 00:06:02.596 Read completed with error (sct=0, sc=8) 00:06:03.531 [2024-11-20 16:07:34.691982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a9a0 is same with the state(6) to be set 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Write completed with error (sct=0, sc=8) 00:06:03.531 Write completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Write completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 
00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Write completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Write completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Write completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Write completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Read completed with error (sct=0, sc=8) 00:06:03.531 Write completed with error (sct=0, sc=8) 00:06:03.531 Write completed with error (sct=0, sc=8) 00:06:03.531 Write completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 [2024-11-20 16:07:34.723046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc19860 is same with the state(6) to be set 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read 
completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 [2024-11-20 16:07:34.723338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc192c0 is same with the state(6) to be set 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 [2024-11-20 16:07:34.725227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb97000d350 is same with the state(6) to be set 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 Write completed with error (sct=0, sc=8) 00:06:03.532 Read completed with error (sct=0, sc=8) 00:06:03.532 [2024-11-20 16:07:34.725685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb97000d020 is same with the state(6) to be set 00:06:03.532 Initializing NVMe Controllers 00:06:03.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:03.532 Controller IO queue size 128, less than required. 00:06:03.532 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:03.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:03.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:03.532 Initialization complete. Launching workers. 
00:06:03.532 ======================================================== 00:06:03.532 Latency(us) 00:06:03.532 Device Information : IOPS MiB/s Average min max 00:06:03.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.74 0.09 901552.76 343.15 1007175.94 00:06:03.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.33 0.08 907159.20 266.71 1009395.19 00:06:03.532 ======================================================== 00:06:03.532 Total : 351.08 0.17 904177.05 266.71 1009395.19 00:06:03.532 00:06:03.532 [2024-11-20 16:07:34.726160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1a9a0 (9): Bad file descriptor 00:06:03.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:03.532 16:07:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.532 16:07:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:03.532 16:07:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1757118 00:06:03.532 16:07:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1757118 00:06:04.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1757118) - No such process 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1757118 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1757118 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1757118 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.100 16:07:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.100 [2024-11-20 16:07:35.251242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.100 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.101 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.101 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.101 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1757809 00:06:04.101 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:04.101 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1757809 00:06:04.101 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:04.101 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:04.360 [2024-11-20 16:07:35.345707] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
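The trace above (delete_subsystem.sh lines 48-58) re-creates the subsystem, attaches the TCP listener and the Delay0 namespace, then launches spdk_nvme_perf against it and polls the perf process. A minimal sketch of that sequence follows; it is not the literal test script, and it assumes rpc.py stands in for the harness's rpc_cmd wrapper and that spdk_nvme_perf is on PATH (the log invokes it by its build-tree path).

  # Re-create the subsystem and expose the Delay0 bdev over NVMe/TCP
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # 3 s of 70/30 random read/write, 512-byte I/O, queue depth 128, cores 2-3 (-c 0xC)
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # Bounded wait on the perf process, mirroring the delay / kill -0 / sleep 0.5 loop
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && break
      sleep 0.5
  done

The 0xC core mask is why both perf summaries report queues "from core 2" and "from core 3".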
00:06:04.618 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:04.619 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1757809 00:06:04.619 16:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.183 16:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.183 16:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1757809 00:06:05.183 16:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.747 16:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.747 16:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1757809 00:06:05.747 16:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.309 16:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.309 16:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1757809 00:06:06.309 16:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.566 16:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.566 16:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1757809 00:06:06.566 16:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.132 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.132 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1757809 00:06:07.132 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.389 Initializing NVMe Controllers 00:06:07.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:07.389 Controller IO queue size 128, less than required. 00:06:07.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:07.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:07.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:07.390 Initialization complete. Launching workers. 
00:06:07.390 ======================================================== 00:06:07.390 Latency(us) 00:06:07.390 Device Information : IOPS MiB/s Average min max 00:06:07.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002352.20 1000159.19 1041394.29 00:06:07.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003665.02 1000201.15 1010720.19 00:06:07.390 ======================================================== 00:06:07.390 Total : 256.00 0.12 1003008.61 1000159.19 1041394.29 00:06:07.390 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1757809 00:06:07.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1757809) - No such process 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1757809 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:07.648 rmmod nvme_tcp 00:06:07.648 rmmod nvme_fabrics 00:06:07.648 rmmod nvme_keyring 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1757085 ']' 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1757085 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1757085 ']' 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1757085 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.648 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1757085 00:06:07.907 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.907 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:07.907 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1757085' 00:06:07.907 killing process with pid 1757085 00:06:07.907 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1757085 00:06:07.907 16:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1757085 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.907 16:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.443 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:10.443 00:06:10.443 real 0m16.356s 00:06:10.443 user 0m29.602s 00:06:10.443 sys 0m5.465s 00:06:10.443 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.443 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:10.443 ************************************ 00:06:10.443 END TEST nvmf_delete_subsystem 00:06:10.443 ************************************ 00:06:10.443 16:07:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:10.443 16:07:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.443 16:07:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.443 16:07:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:10.443 ************************************ 00:06:10.443 START TEST nvmf_host_management 00:06:10.443 ************************************ 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:10.444 * Looking for test storage... 
00:06:10.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.444 --rc genhtml_branch_coverage=1 00:06:10.444 --rc genhtml_function_coverage=1 00:06:10.444 --rc genhtml_legend=1 00:06:10.444 --rc geninfo_all_blocks=1 00:06:10.444 --rc geninfo_unexecuted_blocks=1 00:06:10.444 00:06:10.444 ' 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.444 --rc genhtml_branch_coverage=1 00:06:10.444 --rc genhtml_function_coverage=1 00:06:10.444 --rc genhtml_legend=1 00:06:10.444 --rc geninfo_all_blocks=1 00:06:10.444 --rc geninfo_unexecuted_blocks=1 00:06:10.444 00:06:10.444 ' 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.444 --rc genhtml_branch_coverage=1 00:06:10.444 --rc genhtml_function_coverage=1 00:06:10.444 --rc genhtml_legend=1 00:06:10.444 --rc geninfo_all_blocks=1 00:06:10.444 --rc geninfo_unexecuted_blocks=1 00:06:10.444 00:06:10.444 ' 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.444 --rc genhtml_branch_coverage=1 00:06:10.444 --rc genhtml_function_coverage=1 00:06:10.444 --rc genhtml_legend=1 00:06:10.444 --rc geninfo_all_blocks=1 00:06:10.444 --rc geninfo_unexecuted_blocks=1 00:06:10.444 00:06:10.444 ' 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.444 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:10.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.445 16:07:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.018 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:17.019 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:17.019 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:17.019 Found net devices under 0000:86:00.0: cvl_0_0 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.019 16:07:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:17.019 Found net devices under 0000:86:00.1: cvl_0_1 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:17.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:17.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:06:17.019 00:06:17.019 --- 10.0.0.2 ping statistics --- 00:06:17.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.019 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:17.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:17.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:06:17.019 00:06:17.019 --- 10.0.0.1 ping statistics --- 00:06:17.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.019 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:17.019 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1762033 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1762033 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:17.020 16:07:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1762033 ']' 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.020 16:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.020 [2024-11-20 16:07:47.525541] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:06:17.020 [2024-11-20 16:07:47.525586] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.020 [2024-11-20 16:07:47.604265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.020 [2024-11-20 16:07:47.648340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.020 [2024-11-20 16:07:47.648377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.020 [2024-11-20 16:07:47.648385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.020 [2024-11-20 16:07:47.648392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.020 [2024-11-20 16:07:47.648397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
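The -m 0x1E mask handed to nvmfappstart above selects CPU cores 1-4 (0x1E = 0b11110), which is why the EAL notice reports "Total cores available: 4" and why the reactor notices that follow report cores 1 through 4. A small helper for decoding such masks from a shell, a sketch only (the function name is invented for illustration, plain bash arithmetic otherwise):

  decode_coremask() {
      # Print the CPU indices selected by a hex core mask, e.g. 0x1E -> 1 2 3 4
      local mask=$(( $1 )) cpu=0
      while [ "$mask" -ne 0 ]; do
          if (( mask & 1 )); then printf '%d ' "$cpu"; fi
          mask=$(( mask >> 1 ))
          cpu=$(( cpu + 1 ))
      done
      printf '\n'
  }
  decode_coremask 0x1E   # prints: 1 2 3 4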
00:06:17.020 [2024-11-20 16:07:47.649892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.020 [2024-11-20 16:07:47.650023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.020 [2024-11-20 16:07:47.650062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.020 [2024-11-20 16:07:47.650063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.278 [2024-11-20 16:07:48.422477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.278 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.278 Malloc0 00:06:17.278 [2024-11-20 16:07:48.501007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1762310 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1762310 /var/tmp/bdevperf.sock 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1762310 ']' 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:17.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:17.536 { 00:06:17.536 "params": { 00:06:17.536 "name": "Nvme$subsystem", 00:06:17.536 "trtype": "$TEST_TRANSPORT", 00:06:17.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:17.536 "adrfam": "ipv4", 00:06:17.536 "trsvcid": "$NVMF_PORT", 00:06:17.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:17.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:17.536 "hdgst": ${hdgst:-false}, 00:06:17.536 "ddgst": ${ddgst:-false} 00:06:17.536 }, 00:06:17.536 "method": "bdev_nvme_attach_controller" 00:06:17.536 } 00:06:17.536 EOF 00:06:17.536 )") 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:17.536 16:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:17.536 "params": { 00:06:17.536 "name": "Nvme0", 00:06:17.536 "trtype": "tcp", 00:06:17.536 "traddr": "10.0.0.2", 00:06:17.536 "adrfam": "ipv4", 00:06:17.536 "trsvcid": "4420", 00:06:17.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:17.536 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:17.536 "hdgst": false, 00:06:17.536 "ddgst": false 00:06:17.536 }, 00:06:17.536 "method": "bdev_nvme_attach_controller" 00:06:17.536 }' 00:06:17.536 [2024-11-20 16:07:48.595490] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:06:17.536 [2024-11-20 16:07:48.595536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762310 ] 00:06:17.536 [2024-11-20 16:07:48.672615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.536 [2024-11-20 16:07:48.713679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.793 Running I/O for 10 seconds... 00:06:18.359 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.359 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:18.359 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:18.359 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.359 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.359 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.359 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:18.359 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:18.359 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:18.359 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=921 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 921 -ge 100 ']' 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:18.360 16:07:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.360 [2024-11-20 16:07:49.492847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.492913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.492924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.492932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.492940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.492948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.492956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.492964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.492972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.492980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.492988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.492997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493057] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 [2024-11-20 16:07:49.493178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14830b0 is same with the state(6) to be set 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.360 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.360 [2024-11-20 16:07:49.500302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500365] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.360 [2024-11-20 16:07:49.500555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.360 [2024-11-20 16:07:49.500563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.500991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.500999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.501005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.501015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.501021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.501029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.501036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.501043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.501049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.501058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.501064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.501072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.501079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.501087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.501093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.501102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.501108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.501115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.501122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.361 [2024-11-20 16:07:49.501129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.361 [2024-11-20 16:07:49.501136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.362 [2024-11-20 16:07:49.501143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.362 [2024-11-20 16:07:49.501150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.362 [2024-11-20 16:07:49.501158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.362 [2024-11-20 16:07:49.501164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.362 [2024-11-20 16:07:49.501172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.362 [2024-11-20 16:07:49.501178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.362 [2024-11-20 16:07:49.501186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.362 [2024-11-20 16:07:49.501194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.362 [2024-11-20 16:07:49.501207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.362 [2024-11-20 16:07:49.501214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.362 [2024-11-20 16:07:49.501221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.362 [2024-11-20 16:07:49.501228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.362 [2024-11-20 16:07:49.501236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.362 [2024-11-20 16:07:49.501243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.362 [2024-11-20 16:07:49.501250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.362 [2024-11-20 16:07:49.501257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.362 [2024-11-20 16:07:49.501265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.362 [2024-11-20 16:07:49.501271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:18.362 [2024-11-20 16:07:49.501297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:06:18.362 [2024-11-20 16:07:49.502199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:18.362 task offset: 0 on job bdev=Nvme0n1 fails 00:06:18.362 00:06:18.362 Latency(us) 00:06:18.362 [2024-11-20T15:07:49.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:18.362 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:18.362 Job: Nvme0n1 ended in about 0.50 seconds with error 00:06:18.362 Verification LBA range: start 0x0 length 0x400 00:06:18.362 Nvme0n1 : 0.50 2031.69 126.98 126.98 0.00 28965.05 1552.58 26588.89 00:06:18.362 [2024-11-20T15:07:49.596Z] =================================================================================================================== 00:06:18.362 [2024-11-20T15:07:49.596Z] Total : 2031.69 126.98 126.98 0.00 28965.05 1552.58 26588.89 00:06:18.362 [2024-11-20 16:07:49.504583] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.362 [2024-11-20 16:07:49.504604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1437500 (9): Bad file descriptor 00:06:18.362 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.362 16:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:18.362 [2024-11-20 16:07:49.516134] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
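The burst of "ABORTED - SQ DELETION" completions, the "CQ transport error -6" and the controller reset above are the intended effect of host_management.sh revoking and then restoring this host's access to the subsystem while I/O is in flight (the nvmf_subsystem_remove_host / nvmf_subsystem_add_host rpc_cmd calls visible in the trace). A sketch of driving the same allow-list toggle by hand with SPDK's scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket used by the target here:

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  # Revoke the host: its queue pairs are torn down and in-flight I/O is aborted
  $RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Restore the host: the initiator-side bdev_nvme driver can then reconnect (the
  # "Resetting controller successful" notice above is that reconnect completing)
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0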
00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1762310 00:06:19.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1762310) - No such process 00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:19.296 { 00:06:19.296 "params": { 00:06:19.296 "name": "Nvme$subsystem", 00:06:19.296 "trtype": "$TEST_TRANSPORT", 00:06:19.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:19.296 "adrfam": "ipv4", 00:06:19.296 "trsvcid": "$NVMF_PORT", 00:06:19.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:19.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:19.296 "hdgst": ${hdgst:-false}, 00:06:19.296 "ddgst": ${ddgst:-false} 00:06:19.296 }, 00:06:19.296 "method": "bdev_nvme_attach_controller" 00:06:19.296 } 00:06:19.296 EOF 00:06:19.296 )") 00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:19.296 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:19.553 16:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:19.553 "params": { 00:06:19.553 "name": "Nvme0", 00:06:19.553 "trtype": "tcp", 00:06:19.553 "traddr": "10.0.0.2", 00:06:19.553 "adrfam": "ipv4", 00:06:19.553 "trsvcid": "4420", 00:06:19.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:19.554 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:19.554 "hdgst": false, 00:06:19.554 "ddgst": false 00:06:19.554 }, 00:06:19.554 "method": "bdev_nvme_attach_controller" 00:06:19.554 }' 00:06:19.554 [2024-11-20 16:07:50.560571] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:06:19.554 [2024-11-20 16:07:50.560621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762558 ] 00:06:19.554 [2024-11-20 16:07:50.635079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.554 [2024-11-20 16:07:50.676329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.811 Running I/O for 1 seconds... 
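As in the first run, bdevperf is configured entirely on the fly: gen_nvmf_target_json (from test/nvmf/common.sh, traced above) prints a bdev_nvme_attach_controller entry for Nvme0 at 10.0.0.2:4420, and the result reaches bdevperf through process substitution (--json /dev/fd/62). A minimal sketch of the equivalent stand-alone invocation, using only flags and paths that appear in this log:

  # -q 64      queue depth per job
  # -o 65536   I/O size in bytes (64 KiB)
  # -w verify  verification workload (written data is read back and checked)
  # -t 1       run time in seconds
  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1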
00:06:20.746 1984.00 IOPS, 124.00 MiB/s 00:06:20.746 Latency(us) 00:06:20.746 [2024-11-20T15:07:51.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:20.746 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:20.746 Verification LBA range: start 0x0 length 0x400 00:06:20.746 Nvme0n1 : 1.02 2007.34 125.46 0.00 0.00 31394.42 5492.54 26713.72 00:06:20.746 [2024-11-20T15:07:51.980Z] =================================================================================================================== 00:06:20.746 [2024-11-20T15:07:51.980Z] Total : 2007.34 125.46 0.00 0.00 31394.42 5492.54 26713.72 00:06:21.004 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:21.004 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:21.004 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:21.004 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:21.005 rmmod nvme_tcp 00:06:21.005 rmmod nvme_fabrics 00:06:21.005 rmmod nvme_keyring 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1762033 ']' 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1762033 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1762033 ']' 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1762033 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762033 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:21.005 16:07:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762033' 00:06:21.005 killing process with pid 1762033 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1762033 00:06:21.005 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1762033 00:06:21.264 [2024-11-20 16:07:52.327256] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:21.264 16:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:23.797 00:06:23.797 real 0m13.199s 00:06:23.797 user 0m23.057s 00:06:23.797 sys 0m5.650s 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.797 ************************************ 00:06:23.797 END TEST nvmf_host_management 00:06:23.797 ************************************ 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.797 ************************************ 00:06:23.797 START TEST nvmf_lvol 00:06:23.797 ************************************ 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:23.797 * Looking for test storage... 00:06:23.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.797 --rc genhtml_branch_coverage=1 00:06:23.797 --rc genhtml_function_coverage=1 00:06:23.797 --rc genhtml_legend=1 00:06:23.797 --rc geninfo_all_blocks=1 00:06:23.797 --rc geninfo_unexecuted_blocks=1 00:06:23.797 00:06:23.797 ' 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.797 --rc genhtml_branch_coverage=1 00:06:23.797 --rc genhtml_function_coverage=1 00:06:23.797 --rc genhtml_legend=1 00:06:23.797 --rc geninfo_all_blocks=1 00:06:23.797 --rc geninfo_unexecuted_blocks=1 00:06:23.797 00:06:23.797 ' 00:06:23.797 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.798 --rc genhtml_branch_coverage=1 00:06:23.798 --rc genhtml_function_coverage=1 00:06:23.798 --rc genhtml_legend=1 00:06:23.798 --rc geninfo_all_blocks=1 00:06:23.798 --rc geninfo_unexecuted_blocks=1 00:06:23.798 00:06:23.798 ' 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.798 --rc genhtml_branch_coverage=1 00:06:23.798 --rc genhtml_function_coverage=1 00:06:23.798 --rc genhtml_legend=1 00:06:23.798 --rc geninfo_all_blocks=1 00:06:23.798 --rc geninfo_unexecuted_blocks=1 00:06:23.798 00:06:23.798 ' 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:23.798 16:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:30.364 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:30.364 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.364 16:08:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.364 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:30.365 Found net devices under 0000:86:00.0: cvl_0_0 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:30.365 Found net devices under 0000:86:00.1: cvl_0_1 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:30.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:06:30.365 00:06:30.365 --- 10.0.0.2 ping statistics --- 00:06:30.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.365 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:30.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:06:30.365 00:06:30.365 --- 10.0.0.1 ping statistics --- 00:06:30.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.365 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1766358 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1766358 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1766358 ']' 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.365 16:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:30.365 [2024-11-20 16:08:00.808333] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
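Setup recap: the nvmftestinit trace above builds a two-port loopback topology out of the E810 ports it detected. The first port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, the firewall is opened for TCP/4420, reachability is checked with ping in both directions, and nvmf_tgt is then started inside the namespace. A minimal sketch of that sequence, assuming the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses reported by this particular host (paths shortened to the SPDK tree root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the script also tags the rule with an SPDK_NVMF comment
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7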
00:06:30.365 [2024-11-20 16:08:00.808387] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.365 [2024-11-20 16:08:00.890696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.365 [2024-11-20 16:08:00.931273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.365 [2024-11-20 16:08:00.931311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.365 [2024-11-20 16:08:00.931317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:30.365 [2024-11-20 16:08:00.931323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:30.365 [2024-11-20 16:08:00.931327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:30.365 [2024-11-20 16:08:00.932642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.365 [2024-11-20 16:08:00.932748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.365 [2024-11-20 16:08:00.932750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.365 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.365 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:30.365 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:30.365 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.365 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:30.365 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:30.365 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:30.365 [2024-11-20 16:08:01.242978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.365 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:30.365 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:30.365 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:30.623 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:30.623 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:30.882 16:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:30.882 16:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=52a038d5-13de-4bb5-8976-ab5fb28a288e 00:06:30.882 16:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 52a038d5-13de-4bb5-8976-ab5fb28a288e lvol 20 00:06:31.141 16:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2db24975-2fd6-4a09-855d-884f02e187e7 00:06:31.141 16:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:31.399 16:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2db24975-2fd6-4a09-855d-884f02e187e7 00:06:31.659 16:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:31.659 [2024-11-20 16:08:02.868558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.917 16:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:31.917 16:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1766829 00:06:31.917 16:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:31.917 16:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:33.292 16:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2db24975-2fd6-4a09-855d-884f02e187e7 MY_SNAPSHOT 00:06:33.292 16:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a17a9cbd-95dc-4fd1-bfef-8c5ed8107813 00:06:33.292 16:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2db24975-2fd6-4a09-855d-884f02e187e7 30 00:06:33.551 16:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a17a9cbd-95dc-4fd1-bfef-8c5ed8107813 MY_CLONE 00:06:33.809 16:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0aef985b-9e45-42d5-a1b6-f682cbb7e2cf 00:06:33.809 16:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0aef985b-9e45-42d5-a1b6-f682cbb7e2cf 00:06:34.379 16:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1766829 00:06:42.491 Initializing NVMe Controllers 00:06:42.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:42.491 Controller IO queue size 128, less than required. 00:06:42.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
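Test-body recap: underneath the xtrace, nvmf_lvol.sh drives a short RPC sequence against that target before the perf summary below is printed: build a RAID0 of two malloc bdevs, put an lvolstore on it, carve a lvol out of it (created at size 20, later resized to 30), export it over NVMe/TCP, start spdk_nvme_perf against it, and exercise snapshot, resize, clone and inflate while the I/O is still running. A sketch of the same flow as it appears in the trace (rpc.py stands for scripts/rpc.py; the <...> placeholders are the UUIDs this particular run got back):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                      # Malloc0
  rpc.py bdev_malloc_create 64 512                      # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs             # -> <lvs-uuid>
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20         # -> <lvol-uuid>
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT     # -> <snapshot-uuid>
  rpc.py bdev_lvol_resize <lvol-uuid> 30
  rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE       # -> <clone-uuid>
  rpc.py bdev_lvol_inflate <clone-uuid>
  wait                                                  # for spdk_nvme_perf, whose summary follows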
00:06:42.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:42.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:42.491 Initialization complete. Launching workers. 00:06:42.491 ======================================================== 00:06:42.491 Latency(us) 00:06:42.491 Device Information : IOPS MiB/s Average min max 00:06:42.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11944.90 46.66 10715.48 1583.04 48768.97 00:06:42.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12147.80 47.45 10537.91 3514.58 56883.03 00:06:42.491 ======================================================== 00:06:42.491 Total : 24092.70 94.11 10625.94 1583.04 56883.03 00:06:42.491 00:06:42.491 16:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:42.491 16:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2db24975-2fd6-4a09-855d-884f02e187e7 00:06:42.749 16:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 52a038d5-13de-4bb5-8976-ab5fb28a288e 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:43.008 rmmod nvme_tcp 00:06:43.008 rmmod nvme_fabrics 00:06:43.008 rmmod nvme_keyring 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1766358 ']' 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1766358 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1766358 ']' 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1766358 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1766358 00:06:43.008 16:08:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1766358' 00:06:43.008 killing process with pid 1766358 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1766358 00:06:43.008 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1766358 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.280 16:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.287 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:45.287 00:06:45.287 real 0m21.929s 00:06:45.287 user 1m2.954s 00:06:45.287 sys 0m7.588s 00:06:45.287 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.287 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.287 ************************************ 00:06:45.287 END TEST nvmf_lvol 00:06:45.287 ************************************ 00:06:45.287 16:08:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:45.287 16:08:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.287 16:08:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.287 16:08:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:45.287 ************************************ 00:06:45.287 START TEST nvmf_lvs_grow 00:06:45.287 ************************************ 00:06:45.287 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:45.547 * Looking for test storage... 
00:06:45.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.547 --rc genhtml_branch_coverage=1 00:06:45.547 --rc genhtml_function_coverage=1 00:06:45.547 --rc genhtml_legend=1 00:06:45.547 --rc geninfo_all_blocks=1 00:06:45.547 --rc geninfo_unexecuted_blocks=1 00:06:45.547 00:06:45.547 ' 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.547 --rc genhtml_branch_coverage=1 00:06:45.547 --rc genhtml_function_coverage=1 00:06:45.547 --rc genhtml_legend=1 00:06:45.547 --rc geninfo_all_blocks=1 00:06:45.547 --rc geninfo_unexecuted_blocks=1 00:06:45.547 00:06:45.547 ' 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.547 --rc genhtml_branch_coverage=1 00:06:45.547 --rc genhtml_function_coverage=1 00:06:45.547 --rc genhtml_legend=1 00:06:45.547 --rc geninfo_all_blocks=1 00:06:45.547 --rc geninfo_unexecuted_blocks=1 00:06:45.547 00:06:45.547 ' 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.547 --rc genhtml_branch_coverage=1 00:06:45.547 --rc genhtml_function_coverage=1 00:06:45.547 --rc genhtml_legend=1 00:06:45.547 --rc geninfo_all_blocks=1 00:06:45.547 --rc geninfo_unexecuted_blocks=1 00:06:45.547 00:06:45.547 ' 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:45.547 16:08:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.547 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:45.548 16:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:52.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:52.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.119 16:08:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:52.119 Found net devices under 0000:86:00.0: cvl_0_0 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:52.119 Found net devices under 0000:86:00.1: cvl_0_1 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.119 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:52.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:06:52.120 00:06:52.120 --- 10.0.0.2 ping statistics --- 00:06:52.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.120 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:52.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:06:52.120 00:06:52.120 --- 10.0.0.1 ping statistics --- 00:06:52.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.120 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1772214 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1772214 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1772214 ']' 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.120 16:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.120 [2024-11-20 16:08:22.817036] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
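The trace above is the harness building its NVMe/TCP loopback topology before the lvs_grow tests start: one port of the e810 pair (cvl_0_0) is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, the NVMe/TCP port is opened in iptables, and reachability is checked with ping in both directions before nvmf_tgt is launched inside the namespace. As a rough manual equivalent, assembled only from the commands visible in the log rather than an exact replay of nvmf_tcp_init, the sequence looks roughly like this:

# move the target-side interface into its own namespace (names taken from the trace)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# address both ends: the initiator side stays in the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# bring the links (and loopback inside the namespace) up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# sanity-check connectivity in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application itself is then started under ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace before nvmfappstart runs.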
00:06:52.120 [2024-11-20 16:08:22.817083] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.120 [2024-11-20 16:08:22.898309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.120 [2024-11-20 16:08:22.936942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.120 [2024-11-20 16:08:22.936975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.120 [2024-11-20 16:08:22.936981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.120 [2024-11-20 16:08:22.936987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.120 [2024-11-20 16:08:22.936992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:52.120 [2024-11-20 16:08:22.937555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:52.689 [2024-11-20 16:08:23.852383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.689 ************************************ 00:06:52.689 START TEST lvs_grow_clean 00:06:52.689 ************************************ 00:06:52.689 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:52.948 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:52.948 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:52.948 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:52.948 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:52.948 16:08:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:52.948 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:52.948 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:52.948 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:52.948 16:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:52.948 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:52.948 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:53.207 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:06:53.207 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:06:53.207 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:53.466 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:53.466 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:53.466 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 056e8cfa-c38d-4c37-8810-7a7b890dbf25 lvol 150 00:06:53.466 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6c796ec9-8153-4f70-b148-db001a116c0c 00:06:53.466 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:53.726 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:53.726 [2024-11-20 16:08:24.859070] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:53.726 [2024-11-20 16:08:24.859118] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:53.726 true 00:06:53.726 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:06:53.726 16:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:53.985 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:53.986 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:54.245 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6c796ec9-8153-4f70-b148-db001a116c0c 00:06:54.245 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:54.505 [2024-11-20 16:08:25.605334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.505 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:54.764 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1772750 00:06:54.764 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:54.764 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:54.764 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1772750 /var/tmp/bdevperf.sock 00:06:54.764 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1772750 ']' 00:06:54.764 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:54.764 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.764 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:54.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:54.764 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.764 16:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:54.764 [2024-11-20 16:08:25.845095] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
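Condensing the xtrace above, lvs_grow_clean builds its target out of plain JSON-RPC calls: a TCP transport, an AIO bdev backed by a 200M file, an lvstore with 4 MiB clusters on top of it, a 150M lvol, and finally a subsystem that exposes that lvol on 10.0.0.2:4420. A compressed sketch of that RPC sequence follows; rpc.py stands in for the full workspace path shown in the log, /path/to/aio_bdev for the aio_bdev file under test/nvmf/target, and the lvstore/lvol identifiers are simply whatever the create calls print:

rpc.py nvmf_create_transport -t tcp -o -u 8192

# 200M file-backed AIO bdev with a 4096-byte block size
truncate -s 200M /path/to/aio_bdev
rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096

# lvstore with 4 MiB clusters, then a 150M lvol on it
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)

# grow the backing file and let the AIO bdev pick up the new size
truncate -s 400M /path/to/aio_bdev
rpc.py bdev_aio_rescan aio_bdev

# expose the lvol over NVMe/TCP
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

At this point the lvstore still reports 49 total_data_clusters (slightly less than 200 MiB / 4 MiB, presumably because part of the device goes to lvstore metadata); the jump to 99 clusters only happens further down, when bdev_lvol_grow_lvstore is issued while the bdevperf workload is running.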
00:06:54.764 [2024-11-20 16:08:25.845144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772750 ] 00:06:54.764 [2024-11-20 16:08:25.920503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.764 [2024-11-20 16:08:25.962018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.700 16:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.700 16:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:55.700 16:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:55.958 Nvme0n1 00:06:55.958 16:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:56.216 [ 00:06:56.216 { 00:06:56.216 "name": "Nvme0n1", 00:06:56.216 "aliases": [ 00:06:56.216 "6c796ec9-8153-4f70-b148-db001a116c0c" 00:06:56.216 ], 00:06:56.216 "product_name": "NVMe disk", 00:06:56.216 "block_size": 4096, 00:06:56.216 "num_blocks": 38912, 00:06:56.216 "uuid": "6c796ec9-8153-4f70-b148-db001a116c0c", 00:06:56.216 "numa_id": 1, 00:06:56.216 "assigned_rate_limits": { 00:06:56.216 "rw_ios_per_sec": 0, 00:06:56.216 "rw_mbytes_per_sec": 0, 00:06:56.216 "r_mbytes_per_sec": 0, 00:06:56.216 "w_mbytes_per_sec": 0 00:06:56.216 }, 00:06:56.216 "claimed": false, 00:06:56.216 "zoned": false, 00:06:56.216 "supported_io_types": { 00:06:56.216 "read": true, 00:06:56.216 "write": true, 00:06:56.216 "unmap": true, 00:06:56.216 "flush": true, 00:06:56.216 "reset": true, 00:06:56.216 "nvme_admin": true, 00:06:56.216 "nvme_io": true, 00:06:56.216 "nvme_io_md": false, 00:06:56.216 "write_zeroes": true, 00:06:56.216 "zcopy": false, 00:06:56.216 "get_zone_info": false, 00:06:56.216 "zone_management": false, 00:06:56.216 "zone_append": false, 00:06:56.216 "compare": true, 00:06:56.216 "compare_and_write": true, 00:06:56.216 "abort": true, 00:06:56.216 "seek_hole": false, 00:06:56.216 "seek_data": false, 00:06:56.216 "copy": true, 00:06:56.216 "nvme_iov_md": false 00:06:56.216 }, 00:06:56.216 "memory_domains": [ 00:06:56.216 { 00:06:56.216 "dma_device_id": "system", 00:06:56.216 "dma_device_type": 1 00:06:56.216 } 00:06:56.216 ], 00:06:56.216 "driver_specific": { 00:06:56.216 "nvme": [ 00:06:56.216 { 00:06:56.216 "trid": { 00:06:56.216 "trtype": "TCP", 00:06:56.216 "adrfam": "IPv4", 00:06:56.216 "traddr": "10.0.0.2", 00:06:56.216 "trsvcid": "4420", 00:06:56.216 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:56.216 }, 00:06:56.216 "ctrlr_data": { 00:06:56.216 "cntlid": 1, 00:06:56.216 "vendor_id": "0x8086", 00:06:56.216 "model_number": "SPDK bdev Controller", 00:06:56.216 "serial_number": "SPDK0", 00:06:56.216 "firmware_revision": "25.01", 00:06:56.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:56.216 "oacs": { 00:06:56.216 "security": 0, 00:06:56.216 "format": 0, 00:06:56.216 "firmware": 0, 00:06:56.216 "ns_manage": 0 00:06:56.216 }, 00:06:56.216 "multi_ctrlr": true, 00:06:56.216 
"ana_reporting": false 00:06:56.216 }, 00:06:56.216 "vs": { 00:06:56.217 "nvme_version": "1.3" 00:06:56.217 }, 00:06:56.217 "ns_data": { 00:06:56.217 "id": 1, 00:06:56.217 "can_share": true 00:06:56.217 } 00:06:56.217 } 00:06:56.217 ], 00:06:56.217 "mp_policy": "active_passive" 00:06:56.217 } 00:06:56.217 } 00:06:56.217 ] 00:06:56.217 16:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1773078 00:06:56.217 16:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:56.217 16:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:56.217 Running I/O for 10 seconds... 00:06:57.590 Latency(us) 00:06:57.590 [2024-11-20T15:08:28.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.590 Nvme0n1 : 1.00 23375.00 91.31 0.00 0.00 0.00 0.00 0.00 00:06:57.590 [2024-11-20T15:08:28.824Z] =================================================================================================================== 00:06:57.590 [2024-11-20T15:08:28.824Z] Total : 23375.00 91.31 0.00 0.00 0.00 0.00 0.00 00:06:57.590 00:06:58.152 16:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:06:58.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.410 Nvme0n1 : 2.00 23515.50 91.86 0.00 0.00 0.00 0.00 0.00 00:06:58.410 [2024-11-20T15:08:29.644Z] =================================================================================================================== 00:06:58.410 [2024-11-20T15:08:29.644Z] Total : 23515.50 91.86 0.00 0.00 0.00 0.00 0.00 00:06:58.410 00:06:58.410 true 00:06:58.410 16:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:06:58.410 16:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:58.668 16:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:58.668 16:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:58.668 16:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1773078 00:06:59.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.236 Nvme0n1 : 3.00 23418.67 91.48 0.00 0.00 0.00 0.00 0.00 00:06:59.236 [2024-11-20T15:08:30.470Z] =================================================================================================================== 00:06:59.236 [2024-11-20T15:08:30.470Z] Total : 23418.67 91.48 0.00 0.00 0.00 0.00 0.00 00:06:59.236 00:07:00.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.610 Nvme0n1 : 4.00 23469.50 91.68 0.00 0.00 0.00 0.00 0.00 00:07:00.610 [2024-11-20T15:08:31.844Z] 
=================================================================================================================== 00:07:00.610 [2024-11-20T15:08:31.844Z] Total : 23469.50 91.68 0.00 0.00 0.00 0.00 0.00 00:07:00.610 00:07:01.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.545 Nvme0n1 : 5.00 23564.80 92.05 0.00 0.00 0.00 0.00 0.00 00:07:01.545 [2024-11-20T15:08:32.779Z] =================================================================================================================== 00:07:01.545 [2024-11-20T15:08:32.779Z] Total : 23564.80 92.05 0.00 0.00 0.00 0.00 0.00 00:07:01.545 00:07:02.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.481 Nvme0n1 : 6.00 23628.00 92.30 0.00 0.00 0.00 0.00 0.00 00:07:02.481 [2024-11-20T15:08:33.715Z] =================================================================================================================== 00:07:02.481 [2024-11-20T15:08:33.715Z] Total : 23628.00 92.30 0.00 0.00 0.00 0.00 0.00 00:07:02.481 00:07:03.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.418 Nvme0n1 : 7.00 23672.71 92.47 0.00 0.00 0.00 0.00 0.00 00:07:03.418 [2024-11-20T15:08:34.652Z] =================================================================================================================== 00:07:03.418 [2024-11-20T15:08:34.652Z] Total : 23672.71 92.47 0.00 0.00 0.00 0.00 0.00 00:07:03.418 00:07:04.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.355 Nvme0n1 : 8.00 23705.25 92.60 0.00 0.00 0.00 0.00 0.00 00:07:04.355 [2024-11-20T15:08:35.589Z] =================================================================================================================== 00:07:04.355 [2024-11-20T15:08:35.589Z] Total : 23705.25 92.60 0.00 0.00 0.00 0.00 0.00 00:07:04.355 00:07:05.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.291 Nvme0n1 : 9.00 23729.33 92.69 0.00 0.00 0.00 0.00 0.00 00:07:05.291 [2024-11-20T15:08:36.525Z] =================================================================================================================== 00:07:05.291 [2024-11-20T15:08:36.525Z] Total : 23729.33 92.69 0.00 0.00 0.00 0.00 0.00 00:07:05.291 00:07:06.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.228 Nvme0n1 : 10.00 23758.40 92.81 0.00 0.00 0.00 0.00 0.00 00:07:06.228 [2024-11-20T15:08:37.462Z] =================================================================================================================== 00:07:06.228 [2024-11-20T15:08:37.462Z] Total : 23758.40 92.81 0.00 0.00 0.00 0.00 0.00 00:07:06.228 00:07:06.228 00:07:06.228 Latency(us) 00:07:06.228 [2024-11-20T15:08:37.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.228 Nvme0n1 : 10.00 23758.26 92.81 0.00 0.00 5384.30 3167.57 11858.90 00:07:06.228 [2024-11-20T15:08:37.462Z] =================================================================================================================== 00:07:06.228 [2024-11-20T15:08:37.462Z] Total : 23758.26 92.81 0.00 0.00 5384.30 3167.57 11858.90 00:07:06.228 { 00:07:06.228 "results": [ 00:07:06.228 { 00:07:06.228 "job": "Nvme0n1", 00:07:06.228 "core_mask": "0x2", 00:07:06.228 "workload": "randwrite", 00:07:06.228 "status": "finished", 00:07:06.228 "queue_depth": 128, 00:07:06.228 "io_size": 4096, 00:07:06.228 
"runtime": 10.002754, 00:07:06.228 "iops": 23758.256976028802, 00:07:06.228 "mibps": 92.80569131261251, 00:07:06.228 "io_failed": 0, 00:07:06.228 "io_timeout": 0, 00:07:06.228 "avg_latency_us": 5384.2990444130255, 00:07:06.228 "min_latency_us": 3167.5733333333333, 00:07:06.228 "max_latency_us": 11858.895238095238 00:07:06.228 } 00:07:06.228 ], 00:07:06.228 "core_count": 1 00:07:06.228 } 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1772750 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1772750 ']' 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1772750 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1772750 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1772750' 00:07:06.487 killing process with pid 1772750 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1772750 00:07:06.487 Received shutdown signal, test time was about 10.000000 seconds 00:07:06.487 00:07:06.487 Latency(us) 00:07:06.487 [2024-11-20T15:08:37.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.487 [2024-11-20T15:08:37.721Z] =================================================================================================================== 00:07:06.487 [2024-11-20T15:08:37.721Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1772750 00:07:06.487 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:06.746 16:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:07.004 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:07:07.004 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:07.263 16:08:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:07.263 [2024-11-20 16:08:38.440769] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:07.263 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:07:07.521 request: 00:07:07.521 { 00:07:07.521 "uuid": "056e8cfa-c38d-4c37-8810-7a7b890dbf25", 00:07:07.521 "method": "bdev_lvol_get_lvstores", 00:07:07.521 "req_id": 1 00:07:07.521 } 00:07:07.521 Got JSON-RPC error response 00:07:07.521 response: 00:07:07.521 { 00:07:07.521 "code": -19, 00:07:07.521 "message": "No such device" 00:07:07.521 } 00:07:07.521 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:07.521 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:07.521 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:07.522 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:07.522 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:07.780 aio_bdev 00:07:07.780 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6c796ec9-8153-4f70-b148-db001a116c0c 00:07:07.780 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6c796ec9-8153-4f70-b148-db001a116c0c 00:07:07.780 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:07.780 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:07.780 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:07.780 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:07.780 16:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:08.039 16:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6c796ec9-8153-4f70-b148-db001a116c0c -t 2000 00:07:08.039 [ 00:07:08.039 { 00:07:08.039 "name": "6c796ec9-8153-4f70-b148-db001a116c0c", 00:07:08.039 "aliases": [ 00:07:08.039 "lvs/lvol" 00:07:08.039 ], 00:07:08.039 "product_name": "Logical Volume", 00:07:08.039 "block_size": 4096, 00:07:08.039 "num_blocks": 38912, 00:07:08.039 "uuid": "6c796ec9-8153-4f70-b148-db001a116c0c", 00:07:08.039 "assigned_rate_limits": { 00:07:08.039 "rw_ios_per_sec": 0, 00:07:08.039 "rw_mbytes_per_sec": 0, 00:07:08.039 "r_mbytes_per_sec": 0, 00:07:08.039 "w_mbytes_per_sec": 0 00:07:08.039 }, 00:07:08.039 "claimed": false, 00:07:08.039 "zoned": false, 00:07:08.039 "supported_io_types": { 00:07:08.039 "read": true, 00:07:08.039 "write": true, 00:07:08.039 "unmap": true, 00:07:08.039 "flush": false, 00:07:08.039 "reset": true, 00:07:08.039 "nvme_admin": false, 00:07:08.039 "nvme_io": false, 00:07:08.039 "nvme_io_md": false, 00:07:08.039 "write_zeroes": true, 00:07:08.039 "zcopy": false, 00:07:08.039 "get_zone_info": false, 00:07:08.039 "zone_management": false, 00:07:08.039 "zone_append": false, 00:07:08.039 "compare": false, 00:07:08.039 "compare_and_write": false, 00:07:08.039 "abort": false, 00:07:08.039 "seek_hole": true, 00:07:08.039 "seek_data": true, 00:07:08.039 "copy": false, 00:07:08.039 "nvme_iov_md": false 00:07:08.039 }, 00:07:08.039 "driver_specific": { 00:07:08.039 "lvol": { 00:07:08.039 "lvol_store_uuid": "056e8cfa-c38d-4c37-8810-7a7b890dbf25", 00:07:08.039 "base_bdev": "aio_bdev", 00:07:08.039 "thin_provision": false, 00:07:08.039 "num_allocated_clusters": 38, 00:07:08.039 "snapshot": false, 00:07:08.039 "clone": false, 00:07:08.039 "esnap_clone": false 00:07:08.039 } 00:07:08.039 } 00:07:08.039 } 00:07:08.039 ] 00:07:08.039 16:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:08.039 16:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:07:08.039 
16:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:08.297 16:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:08.297 16:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:07:08.297 16:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:08.556 16:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:08.556 16:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6c796ec9-8153-4f70-b148-db001a116c0c 00:07:08.556 16:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 056e8cfa-c38d-4c37-8810-7a7b890dbf25 00:07:08.814 16:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:09.073 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.073 00:07:09.073 real 0m16.277s 00:07:09.073 user 0m15.990s 00:07:09.073 sys 0m1.484s 00:07:09.073 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.073 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:09.073 ************************************ 00:07:09.073 END TEST lvs_grow_clean 00:07:09.073 ************************************ 00:07:09.073 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:09.073 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.073 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.073 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.073 ************************************ 00:07:09.073 START TEST lvs_grow_dirty 00:07:09.073 ************************************ 00:07:09.073 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:09.073 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:09.074 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:09.074 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:09.074 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:09.074 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:09.074 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:09.074 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.074 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.074 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:09.332 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:09.332 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:09.591 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ea32181f-e34f-477d-a889-3dbae72a016f 00:07:09.591 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:09.591 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:09.850 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:09.850 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:09.850 16:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ea32181f-e34f-477d-a889-3dbae72a016f lvol 150 00:07:09.850 16:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cc5edb5d-1763-4895-af9d-91a224c10836 00:07:09.850 16:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.850 16:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:10.109 [2024-11-20 16:08:41.246138] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:10.109 [2024-11-20 16:08:41.246188] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:10.109 true 00:07:10.109 16:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:10.109 16:08:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:10.368 16:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:10.368 16:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:10.626 16:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cc5edb5d-1763-4895-af9d-91a224c10836 00:07:10.626 16:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:10.885 [2024-11-20 16:08:41.988366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.885 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:11.144 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1775564 00:07:11.144 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:11.144 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:11.144 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1775564 /var/tmp/bdevperf.sock 00:07:11.144 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1775564 ']' 00:07:11.144 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:11.144 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.144 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:11.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:11.144 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.144 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:11.144 [2024-11-20 16:08:42.233950] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
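The initiator side of the dirty run mirrors the clean case above: bdevperf is started idle (-z) on core mask 0x2, the subsystem is attached as Nvme0 through the /var/tmp/bdevperf.sock RPC socket, and perform_tests drives the 10-second randwrite workload while the harness grows the lvstore underneath it. A sketch of that flow using the same arguments that appear in the trace; bdevperf, rpc.py and bdevperf.py stand in for their full workspace paths, and the backgrounding with & is an assumption for readability rather than a copy of the script's job control:

# start bdevperf in wait-for-RPC mode: 4 KiB randwrite, QD 128, 10 s
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

# attach the NVMe-oF/TCP controller exported by the target
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# kick off the configured workload against Nvme0n1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# while I/O is in flight, grow the lvstore on the target (UUID from the dirty run)
rpc.py bdev_lvol_grow_lvstore -u ea32181f-e34f-477d-a889-3dbae72a016f

What makes this the dirty variant shows up further down in the trace: the harness kill -9s the original nvmf_tgt (pid 1772214) after the grow instead of tearing the lvstore down cleanly, then starts a fresh nvmf_tgt (nvmfpid 1777645) against the same AIO file.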
00:07:11.144 [2024-11-20 16:08:42.233996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775564 ] 00:07:11.144 [2024-11-20 16:08:42.304988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.144 [2024-11-20 16:08:42.344797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.402 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.402 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:11.402 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:11.660 Nvme0n1 00:07:11.660 16:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:11.919 [ 00:07:11.919 { 00:07:11.919 "name": "Nvme0n1", 00:07:11.919 "aliases": [ 00:07:11.919 "cc5edb5d-1763-4895-af9d-91a224c10836" 00:07:11.919 ], 00:07:11.919 "product_name": "NVMe disk", 00:07:11.919 "block_size": 4096, 00:07:11.919 "num_blocks": 38912, 00:07:11.919 "uuid": "cc5edb5d-1763-4895-af9d-91a224c10836", 00:07:11.919 "numa_id": 1, 00:07:11.919 "assigned_rate_limits": { 00:07:11.919 "rw_ios_per_sec": 0, 00:07:11.919 "rw_mbytes_per_sec": 0, 00:07:11.919 "r_mbytes_per_sec": 0, 00:07:11.919 "w_mbytes_per_sec": 0 00:07:11.919 }, 00:07:11.919 "claimed": false, 00:07:11.919 "zoned": false, 00:07:11.919 "supported_io_types": { 00:07:11.919 "read": true, 00:07:11.919 "write": true, 00:07:11.919 "unmap": true, 00:07:11.919 "flush": true, 00:07:11.919 "reset": true, 00:07:11.919 "nvme_admin": true, 00:07:11.919 "nvme_io": true, 00:07:11.919 "nvme_io_md": false, 00:07:11.919 "write_zeroes": true, 00:07:11.919 "zcopy": false, 00:07:11.919 "get_zone_info": false, 00:07:11.919 "zone_management": false, 00:07:11.919 "zone_append": false, 00:07:11.919 "compare": true, 00:07:11.919 "compare_and_write": true, 00:07:11.919 "abort": true, 00:07:11.919 "seek_hole": false, 00:07:11.919 "seek_data": false, 00:07:11.919 "copy": true, 00:07:11.919 "nvme_iov_md": false 00:07:11.919 }, 00:07:11.919 "memory_domains": [ 00:07:11.919 { 00:07:11.919 "dma_device_id": "system", 00:07:11.919 "dma_device_type": 1 00:07:11.919 } 00:07:11.919 ], 00:07:11.919 "driver_specific": { 00:07:11.919 "nvme": [ 00:07:11.919 { 00:07:11.919 "trid": { 00:07:11.919 "trtype": "TCP", 00:07:11.919 "adrfam": "IPv4", 00:07:11.919 "traddr": "10.0.0.2", 00:07:11.919 "trsvcid": "4420", 00:07:11.919 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:11.919 }, 00:07:11.919 "ctrlr_data": { 00:07:11.919 "cntlid": 1, 00:07:11.919 "vendor_id": "0x8086", 00:07:11.919 "model_number": "SPDK bdev Controller", 00:07:11.919 "serial_number": "SPDK0", 00:07:11.919 "firmware_revision": "25.01", 00:07:11.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:11.919 "oacs": { 00:07:11.919 "security": 0, 00:07:11.919 "format": 0, 00:07:11.919 "firmware": 0, 00:07:11.919 "ns_manage": 0 00:07:11.919 }, 00:07:11.919 "multi_ctrlr": true, 00:07:11.919 
"ana_reporting": false 00:07:11.919 }, 00:07:11.919 "vs": { 00:07:11.919 "nvme_version": "1.3" 00:07:11.919 }, 00:07:11.919 "ns_data": { 00:07:11.919 "id": 1, 00:07:11.919 "can_share": true 00:07:11.919 } 00:07:11.919 } 00:07:11.919 ], 00:07:11.919 "mp_policy": "active_passive" 00:07:11.919 } 00:07:11.919 } 00:07:11.919 ] 00:07:11.919 16:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1775791 00:07:11.919 16:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:11.919 16:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:11.919 Running I/O for 10 seconds... 00:07:13.297 Latency(us) 00:07:13.297 [2024-11-20T15:08:44.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.297 Nvme0n1 : 1.00 23118.00 90.30 0.00 0.00 0.00 0.00 0.00 00:07:13.297 [2024-11-20T15:08:44.531Z] =================================================================================================================== 00:07:13.297 [2024-11-20T15:08:44.531Z] Total : 23118.00 90.30 0.00 0.00 0.00 0.00 0.00 00:07:13.297 00:07:13.865 16:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:14.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.123 Nvme0n1 : 2.00 23404.50 91.42 0.00 0.00 0.00 0.00 0.00 00:07:14.123 [2024-11-20T15:08:45.357Z] =================================================================================================================== 00:07:14.123 [2024-11-20T15:08:45.357Z] Total : 23404.50 91.42 0.00 0.00 0.00 0.00 0.00 00:07:14.123 00:07:14.123 true 00:07:14.123 16:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:14.123 16:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:14.382 16:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:14.382 16:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:14.382 16:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1775791 00:07:14.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.949 Nvme0n1 : 3.00 23501.00 91.80 0.00 0.00 0.00 0.00 0.00 00:07:14.949 [2024-11-20T15:08:46.183Z] =================================================================================================================== 00:07:14.949 [2024-11-20T15:08:46.183Z] Total : 23501.00 91.80 0.00 0.00 0.00 0.00 0.00 00:07:14.949 00:07:16.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.324 Nvme0n1 : 4.00 23581.50 92.12 0.00 0.00 0.00 0.00 0.00 00:07:16.324 [2024-11-20T15:08:47.558Z] 
=================================================================================================================== 00:07:16.324 [2024-11-20T15:08:47.558Z] Total : 23581.50 92.12 0.00 0.00 0.00 0.00 0.00 00:07:16.324 00:07:17.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.260 Nvme0n1 : 5.00 23628.00 92.30 0.00 0.00 0.00 0.00 0.00 00:07:17.260 [2024-11-20T15:08:48.494Z] =================================================================================================================== 00:07:17.260 [2024-11-20T15:08:48.494Z] Total : 23628.00 92.30 0.00 0.00 0.00 0.00 0.00 00:07:17.260 00:07:18.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.197 Nvme0n1 : 6.00 23683.33 92.51 0.00 0.00 0.00 0.00 0.00 00:07:18.197 [2024-11-20T15:08:49.431Z] =================================================================================================================== 00:07:18.197 [2024-11-20T15:08:49.431Z] Total : 23683.33 92.51 0.00 0.00 0.00 0.00 0.00 00:07:18.197 00:07:19.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.131 Nvme0n1 : 7.00 23696.71 92.57 0.00 0.00 0.00 0.00 0.00 00:07:19.131 [2024-11-20T15:08:50.365Z] =================================================================================================================== 00:07:19.131 [2024-11-20T15:08:50.365Z] Total : 23696.71 92.57 0.00 0.00 0.00 0.00 0.00 00:07:19.131 00:07:20.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.068 Nvme0n1 : 8.00 23700.50 92.58 0.00 0.00 0.00 0.00 0.00 00:07:20.068 [2024-11-20T15:08:51.302Z] =================================================================================================================== 00:07:20.068 [2024-11-20T15:08:51.302Z] Total : 23700.50 92.58 0.00 0.00 0.00 0.00 0.00 00:07:20.068 00:07:21.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.003 Nvme0n1 : 9.00 23720.56 92.66 0.00 0.00 0.00 0.00 0.00 00:07:21.003 [2024-11-20T15:08:52.237Z] =================================================================================================================== 00:07:21.003 [2024-11-20T15:08:52.237Z] Total : 23720.56 92.66 0.00 0.00 0.00 0.00 0.00 00:07:21.003 00:07:21.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.938 Nvme0n1 : 10.00 23749.60 92.77 0.00 0.00 0.00 0.00 0.00 00:07:21.938 [2024-11-20T15:08:53.172Z] =================================================================================================================== 00:07:21.938 [2024-11-20T15:08:53.172Z] Total : 23749.60 92.77 0.00 0.00 0.00 0.00 0.00 00:07:21.938 00:07:21.938 00:07:21.938 Latency(us) 00:07:21.938 [2024-11-20T15:08:53.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.938 Nvme0n1 : 10.01 23749.79 92.77 0.00 0.00 5386.56 3183.18 12607.88 00:07:21.938 [2024-11-20T15:08:53.173Z] =================================================================================================================== 00:07:21.939 [2024-11-20T15:08:53.173Z] Total : 23749.79 92.77 0.00 0.00 5386.56 3183.18 12607.88 00:07:21.939 { 00:07:21.939 "results": [ 00:07:21.939 { 00:07:21.939 "job": "Nvme0n1", 00:07:21.939 "core_mask": "0x2", 00:07:21.939 "workload": "randwrite", 00:07:21.939 "status": "finished", 00:07:21.939 "queue_depth": 128, 00:07:21.939 "io_size": 4096, 00:07:21.939 
"runtime": 10.005311, 00:07:21.939 "iops": 23749.786488396014, 00:07:21.939 "mibps": 92.77260347029693, 00:07:21.939 "io_failed": 0, 00:07:21.939 "io_timeout": 0, 00:07:21.939 "avg_latency_us": 5386.561894565724, 00:07:21.939 "min_latency_us": 3183.177142857143, 00:07:21.939 "max_latency_us": 12607.878095238095 00:07:21.939 } 00:07:21.939 ], 00:07:21.939 "core_count": 1 00:07:21.939 } 00:07:21.939 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1775564 00:07:21.939 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1775564 ']' 00:07:21.939 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1775564 00:07:21.939 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:22.197 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.197 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1775564 00:07:22.197 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:22.198 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:22.198 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1775564' 00:07:22.198 killing process with pid 1775564 00:07:22.198 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1775564 00:07:22.198 Received shutdown signal, test time was about 10.000000 seconds 00:07:22.198 00:07:22.198 Latency(us) 00:07:22.198 [2024-11-20T15:08:53.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.198 [2024-11-20T15:08:53.432Z] =================================================================================================================== 00:07:22.198 [2024-11-20T15:08:53.432Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:22.198 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1775564 00:07:22.198 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:22.456 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:22.715 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:22.715 16:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:22.975 16:08:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1772214 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1772214 00:07:22.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1772214 Killed "${NVMF_APP[@]}" "$@" 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1777645 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1777645 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1777645 ']' 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.975 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:22.975 [2024-11-20 16:08:54.124297] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:07:22.975 [2024-11-20 16:08:54.124347] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.975 [2024-11-20 16:08:54.200983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.235 [2024-11-20 16:08:54.243459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.235 [2024-11-20 16:08:54.243494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.235 [2024-11-20 16:08:54.243501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.235 [2024-11-20 16:08:54.243507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:23.235 [2024-11-20 16:08:54.243512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:23.235 [2024-11-20 16:08:54.244045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.235 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.235 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:23.235 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:23.235 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.235 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.235 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.235 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:23.494 [2024-11-20 16:08:54.539051] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:23.494 [2024-11-20 16:08:54.539149] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:23.494 [2024-11-20 16:08:54.539175] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:23.494 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:23.494 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cc5edb5d-1763-4895-af9d-91a224c10836 00:07:23.494 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=cc5edb5d-1763-4895-af9d-91a224c10836 00:07:23.494 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:23.494 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:23.494 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:23.494 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:23.494 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:23.753 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cc5edb5d-1763-4895-af9d-91a224c10836 -t 2000 00:07:23.753 [ 00:07:23.753 { 00:07:23.753 "name": "cc5edb5d-1763-4895-af9d-91a224c10836", 00:07:23.753 "aliases": [ 00:07:23.753 "lvs/lvol" 00:07:23.753 ], 00:07:23.753 "product_name": "Logical Volume", 00:07:23.753 "block_size": 4096, 00:07:23.753 "num_blocks": 38912, 00:07:23.753 "uuid": "cc5edb5d-1763-4895-af9d-91a224c10836", 00:07:23.753 "assigned_rate_limits": { 00:07:23.753 "rw_ios_per_sec": 0, 00:07:23.753 "rw_mbytes_per_sec": 0, 
00:07:23.753 "r_mbytes_per_sec": 0, 00:07:23.753 "w_mbytes_per_sec": 0 00:07:23.753 }, 00:07:23.753 "claimed": false, 00:07:23.753 "zoned": false, 00:07:23.753 "supported_io_types": { 00:07:23.753 "read": true, 00:07:23.753 "write": true, 00:07:23.753 "unmap": true, 00:07:23.753 "flush": false, 00:07:23.753 "reset": true, 00:07:23.753 "nvme_admin": false, 00:07:23.753 "nvme_io": false, 00:07:23.753 "nvme_io_md": false, 00:07:23.753 "write_zeroes": true, 00:07:23.753 "zcopy": false, 00:07:23.753 "get_zone_info": false, 00:07:23.753 "zone_management": false, 00:07:23.753 "zone_append": false, 00:07:23.753 "compare": false, 00:07:23.753 "compare_and_write": false, 00:07:23.753 "abort": false, 00:07:23.753 "seek_hole": true, 00:07:23.753 "seek_data": true, 00:07:23.753 "copy": false, 00:07:23.753 "nvme_iov_md": false 00:07:23.753 }, 00:07:23.753 "driver_specific": { 00:07:23.753 "lvol": { 00:07:23.753 "lvol_store_uuid": "ea32181f-e34f-477d-a889-3dbae72a016f", 00:07:23.753 "base_bdev": "aio_bdev", 00:07:23.753 "thin_provision": false, 00:07:23.753 "num_allocated_clusters": 38, 00:07:23.753 "snapshot": false, 00:07:23.753 "clone": false, 00:07:23.753 "esnap_clone": false 00:07:23.753 } 00:07:23.753 } 00:07:23.753 } 00:07:23.753 ] 00:07:23.753 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:23.753 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:23.753 16:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:24.012 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:24.012 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:24.012 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:24.271 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:24.271 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:24.271 [2024-11-20 16:08:55.475854] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:24.530 request: 00:07:24.530 { 00:07:24.530 "uuid": "ea32181f-e34f-477d-a889-3dbae72a016f", 00:07:24.530 "method": "bdev_lvol_get_lvstores", 00:07:24.530 "req_id": 1 00:07:24.530 } 00:07:24.530 Got JSON-RPC error response 00:07:24.530 response: 00:07:24.530 { 00:07:24.530 "code": -19, 00:07:24.530 "message": "No such device" 00:07:24.530 } 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.530 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:24.789 aio_bdev 00:07:24.789 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cc5edb5d-1763-4895-af9d-91a224c10836 00:07:24.789 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=cc5edb5d-1763-4895-af9d-91a224c10836 00:07:24.789 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:24.789 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:24.789 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:24.789 16:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:24.789 16:08:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:25.048 16:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cc5edb5d-1763-4895-af9d-91a224c10836 -t 2000 00:07:25.048 [ 00:07:25.048 { 00:07:25.048 "name": "cc5edb5d-1763-4895-af9d-91a224c10836", 00:07:25.048 "aliases": [ 00:07:25.048 "lvs/lvol" 00:07:25.048 ], 00:07:25.048 "product_name": "Logical Volume", 00:07:25.048 "block_size": 4096, 00:07:25.048 "num_blocks": 38912, 00:07:25.048 "uuid": "cc5edb5d-1763-4895-af9d-91a224c10836", 00:07:25.048 "assigned_rate_limits": { 00:07:25.048 "rw_ios_per_sec": 0, 00:07:25.048 "rw_mbytes_per_sec": 0, 00:07:25.048 "r_mbytes_per_sec": 0, 00:07:25.048 "w_mbytes_per_sec": 0 00:07:25.048 }, 00:07:25.048 "claimed": false, 00:07:25.048 "zoned": false, 00:07:25.048 "supported_io_types": { 00:07:25.048 "read": true, 00:07:25.048 "write": true, 00:07:25.048 "unmap": true, 00:07:25.048 "flush": false, 00:07:25.048 "reset": true, 00:07:25.048 "nvme_admin": false, 00:07:25.048 "nvme_io": false, 00:07:25.048 "nvme_io_md": false, 00:07:25.048 "write_zeroes": true, 00:07:25.048 "zcopy": false, 00:07:25.048 "get_zone_info": false, 00:07:25.048 "zone_management": false, 00:07:25.048 "zone_append": false, 00:07:25.048 "compare": false, 00:07:25.048 "compare_and_write": false, 00:07:25.048 "abort": false, 00:07:25.048 "seek_hole": true, 00:07:25.048 "seek_data": true, 00:07:25.048 "copy": false, 00:07:25.048 "nvme_iov_md": false 00:07:25.048 }, 00:07:25.048 "driver_specific": { 00:07:25.048 "lvol": { 00:07:25.048 "lvol_store_uuid": "ea32181f-e34f-477d-a889-3dbae72a016f", 00:07:25.048 "base_bdev": "aio_bdev", 00:07:25.048 "thin_provision": false, 00:07:25.048 "num_allocated_clusters": 38, 00:07:25.048 "snapshot": false, 00:07:25.048 "clone": false, 00:07:25.048 "esnap_clone": false 00:07:25.048 } 00:07:25.048 } 00:07:25.048 } 00:07:25.048 ] 00:07:25.048 16:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:25.048 16:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:25.049 16:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:25.307 16:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:25.307 16:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:25.307 16:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:25.566 16:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:25.566 16:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cc5edb5d-1763-4895-af9d-91a224c10836 00:07:25.825 16:08:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ea32181f-e34f-477d-a889-3dbae72a016f 00:07:25.825 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.084 00:07:26.084 real 0m16.955s 00:07:26.084 user 0m45.221s 00:07:26.084 sys 0m3.647s 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.084 ************************************ 00:07:26.084 END TEST lvs_grow_dirty 00:07:26.084 ************************************ 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:26.084 nvmf_trace.0 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:26.084 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:26.343 rmmod nvme_tcp 00:07:26.343 rmmod nvme_fabrics 00:07:26.343 rmmod nvme_keyring 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:26.343 
16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1777645 ']' 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1777645 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1777645 ']' 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1777645 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1777645 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1777645' 00:07:26.343 killing process with pid 1777645 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1777645 00:07:26.343 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1777645 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.602 16:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.616 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:28.616 00:07:28.616 real 0m43.182s 00:07:28.616 user 1m6.996s 00:07:28.616 sys 0m10.106s 00:07:28.616 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.616 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:28.616 ************************************ 00:07:28.616 END TEST nvmf_lvs_grow 00:07:28.616 ************************************ 00:07:28.616 16:08:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:28.616 16:08:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.616 16:08:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.616 16:08:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.616 ************************************ 00:07:28.616 START TEST nvmf_bdev_io_wait 00:07:28.616 ************************************ 00:07:28.616 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:28.876 * Looking for test storage... 00:07:28.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.876 --rc genhtml_branch_coverage=1 00:07:28.876 --rc genhtml_function_coverage=1 00:07:28.876 --rc genhtml_legend=1 00:07:28.876 --rc geninfo_all_blocks=1 00:07:28.876 --rc geninfo_unexecuted_blocks=1 00:07:28.876 00:07:28.876 ' 00:07:28.876 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.876 --rc genhtml_branch_coverage=1 00:07:28.876 --rc genhtml_function_coverage=1 00:07:28.876 --rc genhtml_legend=1 00:07:28.877 --rc geninfo_all_blocks=1 00:07:28.877 --rc geninfo_unexecuted_blocks=1 00:07:28.877 00:07:28.877 ' 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.877 --rc genhtml_branch_coverage=1 00:07:28.877 --rc genhtml_function_coverage=1 00:07:28.877 --rc genhtml_legend=1 00:07:28.877 --rc geninfo_all_blocks=1 00:07:28.877 --rc geninfo_unexecuted_blocks=1 00:07:28.877 00:07:28.877 ' 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.877 --rc genhtml_branch_coverage=1 00:07:28.877 --rc genhtml_function_coverage=1 00:07:28.877 --rc genhtml_legend=1 00:07:28.877 --rc geninfo_all_blocks=1 00:07:28.877 --rc geninfo_unexecuted_blocks=1 00:07:28.877 00:07:28.877 ' 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.877 16:08:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:28.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:28.877 16:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.445 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:35.446 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:35.446 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.446 16:09:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:35.446 Found net devices under 0000:86:00.0: cvl_0_0 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:35.446 Found net devices under 0000:86:00.1: cvl_0_1 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:35.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:07:35.446 00:07:35.446 --- 10.0.0.2 ping statistics --- 00:07:35.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.446 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:07:35.446 00:07:35.446 --- 10.0.0.1 ping statistics --- 00:07:35.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.446 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1782105 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1782105 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1782105 ']' 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.446 16:09:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.446 [2024-11-20 16:09:06.042140] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:07:35.446 [2024-11-20 16:09:06.042188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.447 [2024-11-20 16:09:06.123589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.447 [2024-11-20 16:09:06.168582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.447 [2024-11-20 16:09:06.168621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.447 [2024-11-20 16:09:06.168628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.447 [2024-11-20 16:09:06.168636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.447 [2024-11-20 16:09:06.168644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.447 [2024-11-20 16:09:06.170361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.447 [2024-11-20 16:09:06.170469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.447 [2024-11-20 16:09:06.170552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.447 [2024-11-20 16:09:06.170553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:35.447 [2024-11-20 16:09:06.322842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.447 Malloc0 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.447 [2024-11-20 16:09:06.366284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1782405 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1782408 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.447 { 00:07:35.447 "params": { 
00:07:35.447 "name": "Nvme$subsystem", 00:07:35.447 "trtype": "$TEST_TRANSPORT", 00:07:35.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.447 "adrfam": "ipv4", 00:07:35.447 "trsvcid": "$NVMF_PORT", 00:07:35.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.447 "hdgst": ${hdgst:-false}, 00:07:35.447 "ddgst": ${ddgst:-false} 00:07:35.447 }, 00:07:35.447 "method": "bdev_nvme_attach_controller" 00:07:35.447 } 00:07:35.447 EOF 00:07:35.447 )") 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1782412 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.447 { 00:07:35.447 "params": { 00:07:35.447 "name": "Nvme$subsystem", 00:07:35.447 "trtype": "$TEST_TRANSPORT", 00:07:35.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.447 "adrfam": "ipv4", 00:07:35.447 "trsvcid": "$NVMF_PORT", 00:07:35.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.447 "hdgst": ${hdgst:-false}, 00:07:35.447 "ddgst": ${ddgst:-false} 00:07:35.447 }, 00:07:35.447 "method": "bdev_nvme_attach_controller" 00:07:35.447 } 00:07:35.447 EOF 00:07:35.447 )") 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1782417 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.447 { 00:07:35.447 "params": { 00:07:35.447 "name": "Nvme$subsystem", 00:07:35.447 "trtype": "$TEST_TRANSPORT", 00:07:35.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.447 "adrfam": "ipv4", 00:07:35.447 "trsvcid": "$NVMF_PORT", 00:07:35.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.447 "hdgst": ${hdgst:-false}, 
00:07:35.447 "ddgst": ${ddgst:-false} 00:07:35.447 }, 00:07:35.447 "method": "bdev_nvme_attach_controller" 00:07:35.447 } 00:07:35.447 EOF 00:07:35.447 )") 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.447 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.447 { 00:07:35.447 "params": { 00:07:35.447 "name": "Nvme$subsystem", 00:07:35.447 "trtype": "$TEST_TRANSPORT", 00:07:35.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.447 "adrfam": "ipv4", 00:07:35.447 "trsvcid": "$NVMF_PORT", 00:07:35.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.447 "hdgst": ${hdgst:-false}, 00:07:35.447 "ddgst": ${ddgst:-false} 00:07:35.447 }, 00:07:35.447 "method": "bdev_nvme_attach_controller" 00:07:35.447 } 00:07:35.447 EOF 00:07:35.448 )") 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1782405 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.448 "params": { 00:07:35.448 "name": "Nvme1", 00:07:35.448 "trtype": "tcp", 00:07:35.448 "traddr": "10.0.0.2", 00:07:35.448 "adrfam": "ipv4", 00:07:35.448 "trsvcid": "4420", 00:07:35.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.448 "hdgst": false, 00:07:35.448 "ddgst": false 00:07:35.448 }, 00:07:35.448 "method": "bdev_nvme_attach_controller" 00:07:35.448 }' 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.448 "params": { 00:07:35.448 "name": "Nvme1", 00:07:35.448 "trtype": "tcp", 00:07:35.448 "traddr": "10.0.0.2", 00:07:35.448 "adrfam": "ipv4", 00:07:35.448 "trsvcid": "4420", 00:07:35.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.448 "hdgst": false, 00:07:35.448 "ddgst": false 00:07:35.448 }, 00:07:35.448 "method": "bdev_nvme_attach_controller" 00:07:35.448 }' 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.448 "params": { 00:07:35.448 "name": "Nvme1", 00:07:35.448 "trtype": "tcp", 00:07:35.448 "traddr": "10.0.0.2", 00:07:35.448 "adrfam": "ipv4", 00:07:35.448 "trsvcid": "4420", 00:07:35.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.448 "hdgst": false, 00:07:35.448 "ddgst": false 00:07:35.448 }, 00:07:35.448 "method": "bdev_nvme_attach_controller" 00:07:35.448 }' 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.448 16:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.448 "params": { 00:07:35.448 "name": "Nvme1", 00:07:35.448 "trtype": "tcp", 00:07:35.448 "traddr": "10.0.0.2", 00:07:35.448 "adrfam": "ipv4", 00:07:35.448 "trsvcid": "4420", 00:07:35.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.448 "hdgst": false, 00:07:35.448 "ddgst": false 00:07:35.448 }, 00:07:35.448 "method": "bdev_nvme_attach_controller" 00:07:35.448 }' 00:07:35.448 [2024-11-20 16:09:06.415823] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:07:35.448 [2024-11-20 16:09:06.415873] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:35.448 [2024-11-20 16:09:06.421459] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:07:35.448 [2024-11-20 16:09:06.421503] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:35.448 [2024-11-20 16:09:06.422162] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:07:35.448 [2024-11-20 16:09:06.422167] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:07:35.448 [2024-11-20 16:09:06.422208] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:35.448 [2024-11-20 16:09:06.422209] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:35.448 [2024-11-20 16:09:06.597858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.448 [2024-11-20 16:09:06.640248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:35.707 [2024-11-20 16:09:06.690602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.707 [2024-11-20 16:09:06.730992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:35.707 [2024-11-20 16:09:06.814542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.707 [2024-11-20 16:09:06.862263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.707 [2024-11-20 16:09:06.863932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:35.707 [2024-11-20 16:09:06.904719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:35.969 Running I/O for 1 seconds... 00:07:35.969 Running I/O for 1 seconds... 00:07:35.969 Running I/O for 1 seconds... 00:07:35.969 Running I/O for 1 seconds... 00:07:36.905 12235.00 IOPS, 47.79 MiB/s 00:07:36.905 Latency(us) 00:07:36.905 [2024-11-20T15:09:08.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.905 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:36.905 Nvme1n1 : 1.01 12295.85 48.03 0.00 0.00 10375.37 5430.13 16227.96 00:07:36.905 [2024-11-20T15:09:08.139Z] =================================================================================================================== 00:07:36.905 [2024-11-20T15:09:08.139Z] Total : 12295.85 48.03 0.00 0.00 10375.37 5430.13 16227.96 00:07:36.905 11039.00 IOPS, 43.12 MiB/s 00:07:36.905 Latency(us) 00:07:36.905 [2024-11-20T15:09:08.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.905 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:36.905 Nvme1n1 : 1.01 11106.65 43.39 0.00 0.00 11488.69 4618.73 19723.22 00:07:36.905 [2024-11-20T15:09:08.139Z] =================================================================================================================== 00:07:36.905 [2024-11-20T15:09:08.139Z] Total : 11106.65 43.39 0.00 0.00 11488.69 4618.73 19723.22 00:07:36.905 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1782408 00:07:36.905 10548.00 IOPS, 41.20 MiB/s 00:07:36.905 Latency(us) 00:07:36.905 [2024-11-20T15:09:08.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.905 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:36.905 Nvme1n1 : 1.01 10620.09 41.48 0.00 0.00 12017.50 4400.27 22219.82 00:07:36.905 [2024-11-20T15:09:08.139Z] =================================================================================================================== 00:07:36.905 [2024-11-20T15:09:08.139Z] Total : 10620.09 41.48 0.00 0.00 12017.50 4400.27 22219.82 00:07:37.164 246736.00 IOPS, 963.81 MiB/s 00:07:37.165 Latency(us)
00:07:37.165 [2024-11-20T15:09:08.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.165 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:37.165 Nvme1n1 : 1.00 246342.22 962.27 0.00 0.00 516.77 219.43 1575.98 00:07:37.165 [2024-11-20T15:09:08.399Z] =================================================================================================================== 00:07:37.165 [2024-11-20T15:09:08.399Z] Total : 246342.22 962.27 0.00 0.00 516.77 219.43 1575.98 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1782412 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1782417 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:37.165 rmmod nvme_tcp 00:07:37.165 rmmod nvme_fabrics 00:07:37.165 rmmod nvme_keyring 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1782105 ']' 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1782105 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1782105 ']' 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1782105 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.165 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1782105 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.424 16:09:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1782105' 00:07:37.424 killing process with pid 1782105 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1782105 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1782105 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.424 16:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.961 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:39.962 00:07:39.962 real 0m10.882s 00:07:39.962 user 0m16.268s 00:07:39.962 sys 0m6.380s 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.962 ************************************ 00:07:39.962 END TEST nvmf_bdev_io_wait 00:07:39.962 ************************************ 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:39.962 ************************************ 00:07:39.962 START TEST nvmf_queue_depth 00:07:39.962 ************************************ 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:39.962 * Looking for test storage... 
00:07:39.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.962 --rc genhtml_branch_coverage=1 00:07:39.962 --rc genhtml_function_coverage=1 00:07:39.962 --rc genhtml_legend=1 00:07:39.962 --rc geninfo_all_blocks=1 00:07:39.962 --rc geninfo_unexecuted_blocks=1 00:07:39.962 00:07:39.962 ' 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.962 --rc genhtml_branch_coverage=1 00:07:39.962 --rc genhtml_function_coverage=1 00:07:39.962 --rc genhtml_legend=1 00:07:39.962 --rc geninfo_all_blocks=1 00:07:39.962 --rc geninfo_unexecuted_blocks=1 00:07:39.962 00:07:39.962 ' 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.962 --rc genhtml_branch_coverage=1 00:07:39.962 --rc genhtml_function_coverage=1 00:07:39.962 --rc genhtml_legend=1 00:07:39.962 --rc geninfo_all_blocks=1 00:07:39.962 --rc geninfo_unexecuted_blocks=1 00:07:39.962 00:07:39.962 ' 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.962 --rc genhtml_branch_coverage=1 00:07:39.962 --rc genhtml_function_coverage=1 00:07:39.962 --rc genhtml_legend=1 00:07:39.962 --rc geninfo_all_blocks=1 00:07:39.962 --rc geninfo_unexecuted_blocks=1 00:07:39.962 00:07:39.962 ' 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.962 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:39.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:39.963 16:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:46.534 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:46.534 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:46.534 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:46.535 Found net devices under 0000:86:00.0: cvl_0_0 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:46.535 Found net devices under 0000:86:00.1: cvl_0_1 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:46.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:07:46.535 00:07:46.535 --- 10.0.0.2 ping statistics --- 00:07:46.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.535 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:46.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:07:46.535 00:07:46.535 --- 10.0.0.1 ping statistics --- 00:07:46.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.535 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1786270 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1786270 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1786270 ']' 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.535 16:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.535 [2024-11-20 16:09:17.005450] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:07:46.535 [2024-11-20 16:09:17.005497] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.535 [2024-11-20 16:09:17.086216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.535 [2024-11-20 16:09:17.127176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.535 [2024-11-20 16:09:17.127215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.535 [2024-11-20 16:09:17.127223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.535 [2024-11-20 16:09:17.127229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.535 [2024-11-20 16:09:17.127234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.535 [2024-11-20 16:09:17.127761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.535 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.535 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:46.535 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:46.535 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.535 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.535 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.535 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.535 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.536 [2024-11-20 16:09:17.263552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.536 Malloc0 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.536 16:09:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.536 [2024-11-20 16:09:17.313801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1786298 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1786298 /var/tmp/bdevperf.sock 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1786298 ']' 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:46.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.536 [2024-11-20 16:09:17.364021] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
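With the target listening, queue_depth.sh provisions everything over JSON-RPC and then drives a 10-second verify workload at queue depth 1024 from bdevperf. The sketch below spells out the rpc_cmd calls traced above plus the bdevperf attach/run steps that follow, assuming rpc_cmd forwards to scripts/rpc.py (as the rpc_py variable in the later multipath section suggests); all arguments are copied from this run:

```bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target-side configuration (default RPC socket /var/tmp/spdk.sock)
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Client side: bdevperf waits in -z (RPC-driven) mode, then the NVMe bdev is
# attached over TCP and the verify workload is kicked off via perform_tests
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests
```

The per-second IOPS samples, the latency table, and the JSON "results" block that follow are perform_tests' output for that run (about 12.5K IOPS at queue depth 1024 against the Malloc0 namespace).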
00:07:46.536 [2024-11-20 16:09:17.364061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1786298 ] 00:07:46.536 [2024-11-20 16:09:17.438836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.536 [2024-11-20 16:09:17.481092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.536 NVMe0n1 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.536 16:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:46.536 Running I/O for 10 seconds... 00:07:48.850 11763.00 IOPS, 45.95 MiB/s [2024-11-20T15:09:21.021Z] 12250.50 IOPS, 47.85 MiB/s [2024-11-20T15:09:21.959Z] 12261.33 IOPS, 47.90 MiB/s [2024-11-20T15:09:22.896Z] 12280.00 IOPS, 47.97 MiB/s [2024-11-20T15:09:23.832Z] 12296.00 IOPS, 48.03 MiB/s [2024-11-20T15:09:24.769Z] 12387.00 IOPS, 48.39 MiB/s [2024-11-20T15:09:26.146Z] 12411.86 IOPS, 48.48 MiB/s [2024-11-20T15:09:27.082Z] 12403.25 IOPS, 48.45 MiB/s [2024-11-20T15:09:28.016Z] 12445.78 IOPS, 48.62 MiB/s [2024-11-20T15:09:28.016Z] 12454.30 IOPS, 48.65 MiB/s 00:07:56.782 Latency(us) 00:07:56.782 [2024-11-20T15:09:28.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.782 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:56.782 Verification LBA range: start 0x0 length 0x4000 00:07:56.782 NVMe0n1 : 10.06 12468.17 48.70 0.00 0.00 81841.44 19099.06 53177.78 00:07:56.782 [2024-11-20T15:09:28.016Z] =================================================================================================================== 00:07:56.782 [2024-11-20T15:09:28.016Z] Total : 12468.17 48.70 0.00 0.00 81841.44 19099.06 53177.78 00:07:56.782 { 00:07:56.782 "results": [ 00:07:56.782 { 00:07:56.782 "job": "NVMe0n1", 00:07:56.782 "core_mask": "0x1", 00:07:56.782 "workload": "verify", 00:07:56.782 "status": "finished", 00:07:56.782 "verify_range": { 00:07:56.782 "start": 0, 00:07:56.782 "length": 16384 00:07:56.782 }, 00:07:56.782 "queue_depth": 1024, 00:07:56.782 "io_size": 4096, 00:07:56.782 "runtime": 10.061863, 00:07:56.782 "iops": 12468.168171242243, 00:07:56.782 "mibps": 48.70378191891501, 00:07:56.782 "io_failed": 0, 00:07:56.782 "io_timeout": 0, 00:07:56.782 "avg_latency_us": 81841.44301007436, 00:07:56.782 "min_latency_us": 19099.062857142857, 00:07:56.782 "max_latency_us": 53177.782857142854 00:07:56.782 } 00:07:56.782 ], 00:07:56.782 "core_count": 1 00:07:56.782 } 00:07:56.782 16:09:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1786298 00:07:56.782 16:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1786298 ']' 00:07:56.782 16:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1786298 00:07:56.782 16:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:56.782 16:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.782 16:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1786298 00:07:56.782 16:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.782 16:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.782 16:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1786298' 00:07:56.782 killing process with pid 1786298 00:07:56.782 16:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1786298 00:07:56.782 Received shutdown signal, test time was about 10.000000 seconds 00:07:56.782 00:07:56.782 Latency(us) 00:07:56.782 [2024-11-20T15:09:28.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.782 [2024-11-20T15:09:28.016Z] =================================================================================================================== 00:07:56.782 [2024-11-20T15:09:28.016Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:56.782 16:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1786298 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:57.042 rmmod nvme_tcp 00:07:57.042 rmmod nvme_fabrics 00:07:57.042 rmmod nvme_keyring 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1786270 ']' 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1786270 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1786270 ']' 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1786270 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1786270 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1786270' 00:07:57.042 killing process with pid 1786270 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1786270 00:07:57.042 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1786270 00:07:57.300 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:57.300 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:57.300 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:57.300 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:57.300 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:57.300 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:57.300 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:57.301 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.301 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:57.301 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.301 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.301 16:09:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.204 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:59.204 00:07:59.204 real 0m19.709s 00:07:59.204 user 0m22.888s 00:07:59.204 sys 0m6.150s 00:07:59.204 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.204 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.204 ************************************ 00:07:59.204 END TEST nvmf_queue_depth 00:07:59.204 ************************************ 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.464 ************************************ 00:07:59.464 START TEST nvmf_target_multipath 00:07:59.464 ************************************ 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:59.464 * Looking for test storage... 00:07:59.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.464 --rc genhtml_branch_coverage=1 00:07:59.464 --rc genhtml_function_coverage=1 00:07:59.464 --rc genhtml_legend=1 00:07:59.464 --rc geninfo_all_blocks=1 00:07:59.464 --rc geninfo_unexecuted_blocks=1 00:07:59.464 00:07:59.464 ' 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.464 --rc genhtml_branch_coverage=1 00:07:59.464 --rc genhtml_function_coverage=1 00:07:59.464 --rc genhtml_legend=1 00:07:59.464 --rc geninfo_all_blocks=1 00:07:59.464 --rc geninfo_unexecuted_blocks=1 00:07:59.464 00:07:59.464 ' 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.464 --rc genhtml_branch_coverage=1 00:07:59.464 --rc genhtml_function_coverage=1 00:07:59.464 --rc genhtml_legend=1 00:07:59.464 --rc geninfo_all_blocks=1 00:07:59.464 --rc geninfo_unexecuted_blocks=1 00:07:59.464 00:07:59.464 ' 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.464 --rc genhtml_branch_coverage=1 00:07:59.464 --rc genhtml_function_coverage=1 00:07:59.464 --rc genhtml_legend=1 00:07:59.464 --rc geninfo_all_blocks=1 00:07:59.464 --rc geninfo_unexecuted_blocks=1 00:07:59.464 00:07:59.464 ' 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.464 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.724 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.725 16:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:06.298 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:06.298 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:06.298 Found net devices under 0000:86:00.0: cvl_0_0 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.298 16:09:36 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:06.298 Found net devices under 0000:86:00.1: cvl_0_1 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:06.298 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:06.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:08:06.299 00:08:06.299 --- 10.0.0.2 ping statistics --- 00:08:06.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.299 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:08:06.299 00:08:06.299 --- 10.0.0.1 ping statistics --- 00:08:06.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.299 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:06.299 only one NIC for nvmf test 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
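The multipath test above stops almost immediately: after the same test-bed bring-up, multipath.sh finds NVMF_SECOND_TARGET_IP left empty, prints "only one NIC for nvmf test", and calls nvmftestfini before exiting 0. A sketch of that unwind, based on the commands and rmmod output traced here; the body of _remove_spdk_ns is not shown in this log, so the netns deletion at the end is an assumption:

```bash
# Sketch of the nvmftestfini unwind seen above (names from this run).
sync

# Removing nvme-tcp also drops its dependents; the trace shows
# rmmod nvme_tcp, rmmod nvme_fabrics and rmmod nvme_keyring here.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# iptr: keep every iptables rule except the SPDK_NVMF-tagged ones added earlier
iptables-save | grep -v SPDK_NVMF | iptables-restore

# _remove_spdk_ns is not traced; deleting the target namespace like this
# is an assumption about what it does on this rig.
ip netns delete cvl_0_0_ns_spdk

# Clear the initiator-side address, as the trace does explicitly
ip -4 addr flush cvl_0_1
```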
00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.299 rmmod nvme_tcp 00:08:06.299 rmmod nvme_fabrics 00:08:06.299 rmmod nvme_keyring 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.299 16:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:07.677 00:08:07.677 real 0m8.381s 00:08:07.677 user 0m1.871s 00:08:07.677 sys 0m4.527s 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.677 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:07.677 ************************************ 00:08:07.677 END TEST nvmf_target_multipath 00:08:07.677 ************************************ 00:08:07.938 16:09:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:07.938 16:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.938 16:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.938 16:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.938 ************************************ 00:08:07.938 START TEST nvmf_zcopy 00:08:07.938 ************************************ 00:08:07.938 16:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:07.938 * Looking for test storage... 
00:08:07.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:07.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.938 --rc genhtml_branch_coverage=1 00:08:07.938 --rc genhtml_function_coverage=1 00:08:07.938 --rc genhtml_legend=1 00:08:07.938 --rc geninfo_all_blocks=1 00:08:07.938 --rc geninfo_unexecuted_blocks=1 00:08:07.938 00:08:07.938 ' 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:07.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.938 --rc genhtml_branch_coverage=1 00:08:07.938 --rc genhtml_function_coverage=1 00:08:07.938 --rc genhtml_legend=1 00:08:07.938 --rc geninfo_all_blocks=1 00:08:07.938 --rc geninfo_unexecuted_blocks=1 00:08:07.938 00:08:07.938 ' 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:07.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.938 --rc genhtml_branch_coverage=1 00:08:07.938 --rc genhtml_function_coverage=1 00:08:07.938 --rc genhtml_legend=1 00:08:07.938 --rc geninfo_all_blocks=1 00:08:07.938 --rc geninfo_unexecuted_blocks=1 00:08:07.938 00:08:07.938 ' 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:07.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.938 --rc genhtml_branch_coverage=1 00:08:07.938 --rc genhtml_function_coverage=1 00:08:07.938 --rc genhtml_legend=1 00:08:07.938 --rc geninfo_all_blocks=1 00:08:07.938 --rc geninfo_unexecuted_blocks=1 00:08:07.938 00:08:07.938 ' 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.938 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:07.939 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.209 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.209 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.209 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.210 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.210 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.210 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:08.210 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:08.210 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:08.210 16:09:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.782 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:14.783 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:14.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:14.783 Found net devices under 0000:86:00.0: cvl_0_0 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:14.783 Found net devices under 0000:86:00.1: cvl_0_1 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.783 16:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:14.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:08:14.783 00:08:14.783 --- 10.0.0.2 ping statistics --- 00:08:14.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.783 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:14.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:08:14.783 00:08:14.783 --- 10.0.0.1 ping statistics --- 00:08:14.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.783 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.783 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1795192 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1795192 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1795192 ']' 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.784 [2024-11-20 16:09:45.280347] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:08:14.784 [2024-11-20 16:09:45.280398] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.784 [2024-11-20 16:09:45.363268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.784 [2024-11-20 16:09:45.402279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.784 [2024-11-20 16:09:45.402313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.784 [2024-11-20 16:09:45.402321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.784 [2024-11-20 16:09:45.402326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.784 [2024-11-20 16:09:45.402331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.784 [2024-11-20 16:09:45.402900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.784 [2024-11-20 16:09:45.551185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.784 [2024-11-20 16:09:45.571382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.784 malloc0 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:14.784 { 00:08:14.784 "params": { 00:08:14.784 "name": "Nvme$subsystem", 00:08:14.784 "trtype": "$TEST_TRANSPORT", 00:08:14.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.784 "adrfam": "ipv4", 00:08:14.784 "trsvcid": "$NVMF_PORT", 00:08:14.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.784 "hdgst": ${hdgst:-false}, 00:08:14.784 "ddgst": ${ddgst:-false} 00:08:14.784 }, 00:08:14.784 "method": "bdev_nvme_attach_controller" 00:08:14.784 } 00:08:14.784 EOF 00:08:14.784 )") 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
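The trace above is target/zcopy.sh bringing up the target side of the zero-copy test: a TCP transport created with zero-copy enabled (nvmf_create_transport -t tcp -o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 with TCP and discovery listeners on 10.0.0.2:4420, and a malloc bdev (bdev_malloc_create 32 4096 -b malloc0) attached as namespace 1. The rpc_cmd helper seen in the trace wraps SPDK's scripts/rpc.py against the /var/tmp/spdk.sock socket that waitforlisten polls above; a minimal sketch of the same setup issued with rpc.py directly (the $RPC shorthand is added here for readability, the RPC calls themselves are taken verbatim from the trace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # same SPDK tree as this job
  # TCP transport with zero-copy enabled, flags exactly as issued by zcopy.sh@22
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem: allow any host (-a), fixed serial number, up to 10 namespaces (-m 10)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # Data and discovery listeners on the in-namespace target address
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # RAM-backed bdev exported as NSID 1 of the subsystem
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf JSON being assembled around this point in the trace simply points the initiator at that listener (a bdev_nvme_attach_controller entry for 10.0.0.2:4420 and nqn.2016-06.io.spdk:cnode1), as the printf output below shows.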
00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:14.784 16:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:14.784 "params": { 00:08:14.784 "name": "Nvme1", 00:08:14.784 "trtype": "tcp", 00:08:14.784 "traddr": "10.0.0.2", 00:08:14.784 "adrfam": "ipv4", 00:08:14.784 "trsvcid": "4420", 00:08:14.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.784 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.784 "hdgst": false, 00:08:14.784 "ddgst": false 00:08:14.784 }, 00:08:14.784 "method": "bdev_nvme_attach_controller" 00:08:14.784 }' 00:08:14.784 [2024-11-20 16:09:45.654164] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:08:14.784 [2024-11-20 16:09:45.654210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795401 ] 00:08:14.784 [2024-11-20 16:09:45.727623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.784 [2024-11-20 16:09:45.768116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.043 Running I/O for 10 seconds... 00:08:16.919 8684.00 IOPS, 67.84 MiB/s [2024-11-20T15:09:49.089Z] 8753.50 IOPS, 68.39 MiB/s [2024-11-20T15:09:50.467Z] 8782.33 IOPS, 68.61 MiB/s [2024-11-20T15:09:51.403Z] 8775.25 IOPS, 68.56 MiB/s [2024-11-20T15:09:52.338Z] 8766.40 IOPS, 68.49 MiB/s [2024-11-20T15:09:53.275Z] 8782.50 IOPS, 68.61 MiB/s [2024-11-20T15:09:54.213Z] 8788.86 IOPS, 68.66 MiB/s [2024-11-20T15:09:55.150Z] 8795.38 IOPS, 68.71 MiB/s [2024-11-20T15:09:56.087Z] 8795.89 IOPS, 68.72 MiB/s [2024-11-20T15:09:56.347Z] 8796.50 IOPS, 68.72 MiB/s 00:08:25.113 Latency(us) 00:08:25.113 [2024-11-20T15:09:56.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.113 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:25.113 Verification LBA range: start 0x0 length 0x1000 00:08:25.113 Nvme1n1 : 10.01 8798.86 68.74 0.00 0.00 14506.61 2153.33 21470.84 00:08:25.113 [2024-11-20T15:09:56.347Z] =================================================================================================================== 00:08:25.113 [2024-11-20T15:09:56.347Z] Total : 8798.86 68.74 0.00 0.00 14506.61 2153.33 21470.84 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1797050 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:25.113 { 00:08:25.113 "params": { 00:08:25.113 "name": 
"Nvme$subsystem", 00:08:25.113 "trtype": "$TEST_TRANSPORT", 00:08:25.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:25.113 "adrfam": "ipv4", 00:08:25.113 "trsvcid": "$NVMF_PORT", 00:08:25.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.113 "hdgst": ${hdgst:-false}, 00:08:25.113 "ddgst": ${ddgst:-false} 00:08:25.113 }, 00:08:25.113 "method": "bdev_nvme_attach_controller" 00:08:25.113 } 00:08:25.113 EOF 00:08:25.113 )") 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:25.113 [2024-11-20 16:09:56.254386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.113 [2024-11-20 16:09:56.254421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:25.113 16:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:25.113 "params": { 00:08:25.113 "name": "Nvme1", 00:08:25.113 "trtype": "tcp", 00:08:25.113 "traddr": "10.0.0.2", 00:08:25.113 "adrfam": "ipv4", 00:08:25.113 "trsvcid": "4420", 00:08:25.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:25.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:25.113 "hdgst": false, 00:08:25.113 "ddgst": false 00:08:25.113 }, 00:08:25.113 "method": "bdev_nvme_attach_controller" 00:08:25.113 }' 00:08:25.113 [2024-11-20 16:09:56.266382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.113 [2024-11-20 16:09:56.266395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.113 [2024-11-20 16:09:56.278407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.113 [2024-11-20 16:09:56.278417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.113 [2024-11-20 16:09:56.290440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.113 [2024-11-20 16:09:56.290454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.113 [2024-11-20 16:09:56.292022] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:08:25.113 [2024-11-20 16:09:56.292061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797050 ] 00:08:25.113 [2024-11-20 16:09:56.302474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.113 [2024-11-20 16:09:56.302484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.113 [2024-11-20 16:09:56.314509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.113 [2024-11-20 16:09:56.314523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.113 [2024-11-20 16:09:56.326532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.113 [2024-11-20 16:09:56.326542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.113 [2024-11-20 16:09:56.338565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.113 [2024-11-20 16:09:56.338573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.350598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.350607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.362630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.362639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.363602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.373 [2024-11-20 16:09:56.374663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.374678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.386691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.386703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.398722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.398733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.405253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.373 [2024-11-20 16:09:56.410754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.410767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.422799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.422819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.434819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.434836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.446851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:25.373 [2024-11-20 16:09:56.446864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.458887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.458898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.470915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.470927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.482949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.482965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.494977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.494987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.507033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.507056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.519066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.519085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.531093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.531106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.543116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.543126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.555146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.555155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.567184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.567197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.579219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.579233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.373 [2024-11-20 16:09:56.591253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.373 [2024-11-20 16:09:56.591263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.374 [2024-11-20 16:09:56.603288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.374 [2024-11-20 16:09:56.603299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.615319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.615330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 
16:09:56.627355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.627367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.639386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.639396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.651415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.651424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.663456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.663469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.675482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.675492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.687517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.687526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.699551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.699560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.711585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.711600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.723622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.723640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 Running I/O for 5 seconds... 
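The second bdevperf instance (zcopy.sh@37: -t 5 -q 128 -w randrw -M 50 -o 8192) has just started its 5-second random read/write run. The repeating "Requested NSID 1 already in use" / "Unable to add namespace" pairs before and after this point come from the test itself: while that I/O is in flight it keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is already occupied by malloc0, and the add is attempted while the subsystem is paused (hence the nvmf_rpc_ns_paused error), so each failed attempt pauses and resumes the subsystem under live zero-copy traffic. A minimal sketch of the pattern that produces this log (the loop condition on perfpid and the error suppression are assumptions not shown in this excerpt; the RPC call and the perfpid variable come from the trace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # as in the earlier sketch
  perfpid=1797050                        # bdevperf PID recorded by zcopy.sh@39 above
  while kill -0 "$perfpid" 2>/dev/null; do
      # NSID 1 is already taken by malloc0, so this fails on every iteration by design;
      # the interesting part is the subsystem pause/resume each attempt triggers.
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done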
00:08:25.633 [2024-11-20 16:09:56.735644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.735654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.751446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.751467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.765623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.765642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.779413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.779431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.793132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.793151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.806637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.806655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.820275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.820294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.833927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.833946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.847897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.847916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.633 [2024-11-20 16:09:56.861715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.633 [2024-11-20 16:09:56.861733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:56.875823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:56.875842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:56.887247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:56.887265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:56.901548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:56.901567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:56.915001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:56.915019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:56.924880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 
[2024-11-20 16:09:56.924898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:56.938954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:56.938973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:56.953074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:56.953092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:56.962108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:56.962131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:56.976658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:56.976676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:56.990236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:56.990255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:57.004035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:57.004053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:57.017815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:57.017834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:57.031458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:57.031477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:57.045207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:57.045225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:57.059188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:57.059212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:57.073087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:57.073106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:57.086850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:57.086869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:57.100873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:57.100891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.892 [2024-11-20 16:09:57.114745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.892 [2024-11-20 16:09:57.114764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.213 [2024-11-20 16:09:57.129072] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.213 [2024-11-20 16:09:57.129093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace") repeats verbatim roughly every 13-14 ms, on the order of 300 occurrences, with timestamps advancing from 2024-11-20 16:09:57.139 through 16:10:01.031 and elapsed markers from 00:08:26.213 through 00:08:29.910; the next entry, at 16:10:01.045, is cut off at the end of this chunk; the four periodic I/O throughput samples that break up the pattern are kept below ...]
00:08:26.778 16809.00 IOPS, 131.32 MiB/s [2024-11-20T15:09:58.012Z]
00:08:27.558 16859.00 IOPS, 131.71 MiB/s [2024-11-20T15:09:58.792Z]
00:08:28.598 16887.00 IOPS, 131.93 MiB/s [2024-11-20T15:09:59.832Z]
00:08:29.650 16885.25 IOPS, 131.92 MiB/s [2024-11-20T15:10:00.884Z]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.910 [2024-11-20 16:10:01.045913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.910 [2024-11-20 16:10:01.056252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.910 [2024-11-20 16:10:01.056269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.910 [2024-11-20 16:10:01.070232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.910 [2024-11-20 16:10:01.070251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.910 [2024-11-20 16:10:01.084082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.910 [2024-11-20 16:10:01.084100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.910 [2024-11-20 16:10:01.098147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.910 [2024-11-20 16:10:01.098165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.910 [2024-11-20 16:10:01.108634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.910 [2024-11-20 16:10:01.108652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.910 [2024-11-20 16:10:01.123041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.910 [2024-11-20 16:10:01.123058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.910 [2024-11-20 16:10:01.136591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.910 [2024-11-20 16:10:01.136608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.150407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.150424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.164346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.164365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.177818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.177836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.191512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.191530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.205228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.205246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.218925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.218943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.232666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.232684] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.246301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.246319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.260032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.260050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.273500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.273518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.287032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.287057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.300775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.300794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.314512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.314530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.328288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.328307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.341824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.341842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.355449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.355467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.369206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.369225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.383150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.383168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.169 [2024-11-20 16:10:01.396547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.169 [2024-11-20 16:10:01.396566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.428 [2024-11-20 16:10:01.410162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.428 [2024-11-20 16:10:01.410179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.428 [2024-11-20 16:10:01.423650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.428 [2024-11-20 16:10:01.423669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.428 [2024-11-20 16:10:01.437080] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.428 [2024-11-20 16:10:01.437098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.428 [2024-11-20 16:10:01.451126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.428 [2024-11-20 16:10:01.451144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.428 [2024-11-20 16:10:01.464662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.428 [2024-11-20 16:10:01.464680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.428 [2024-11-20 16:10:01.478301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.428 [2024-11-20 16:10:01.478319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.428 [2024-11-20 16:10:01.491760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.428 [2024-11-20 16:10:01.491778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.428 [2024-11-20 16:10:01.505341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.428 [2024-11-20 16:10:01.505359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.429 [2024-11-20 16:10:01.519072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.429 [2024-11-20 16:10:01.519089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.429 [2024-11-20 16:10:01.532583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.429 [2024-11-20 16:10:01.532601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.429 [2024-11-20 16:10:01.546284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.429 [2024-11-20 16:10:01.546309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.429 [2024-11-20 16:10:01.560084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.429 [2024-11-20 16:10:01.560102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.429 [2024-11-20 16:10:01.573738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.429 [2024-11-20 16:10:01.573756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.429 [2024-11-20 16:10:01.587421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.429 [2024-11-20 16:10:01.587439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.429 [2024-11-20 16:10:01.601080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.429 [2024-11-20 16:10:01.601098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.429 [2024-11-20 16:10:01.614674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.429 [2024-11-20 16:10:01.614693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.429 [2024-11-20 16:10:01.628280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.429 [2024-11-20 16:10:01.628298] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.429 [2024-11-20 16:10:01.642321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.429 [2024-11-20 16:10:01.642341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.429 [2024-11-20 16:10:01.652870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.429 [2024-11-20 16:10:01.652888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.666931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.666950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.681220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.681238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.697052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.697071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.710956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.710974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.724827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.724845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.738111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.738129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 16919.80 IOPS, 132.19 MiB/s [2024-11-20T15:10:01.922Z] [2024-11-20 16:10:01.749841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.749859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 00:08:30.688 Latency(us) 00:08:30.688 [2024-11-20T15:10:01.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.688 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:30.688 Nvme1n1 : 5.01 16922.33 132.21 0.00 0.00 7556.46 3510.86 15666.22 00:08:30.688 [2024-11-20T15:10:01.922Z] =================================================================================================================== 00:08:30.688 [2024-11-20T15:10:01.922Z] Total : 16922.33 132.21 0.00 0.00 7556.46 3510.86 15666.22 00:08:30.688 [2024-11-20 16:10:01.760392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.760407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.772423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.772436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.784459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 
16:10:01.784479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.796487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.796503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.808519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.808531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.820547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.820560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.832580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.832592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.688 [2024-11-20 16:10:01.844610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.688 [2024-11-20 16:10:01.844624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.689 [2024-11-20 16:10:01.856643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.689 [2024-11-20 16:10:01.856657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.689 [2024-11-20 16:10:01.868672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.689 [2024-11-20 16:10:01.868681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.689 [2024-11-20 16:10:01.880709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.689 [2024-11-20 16:10:01.880719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.689 [2024-11-20 16:10:01.892739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.689 [2024-11-20 16:10:01.892751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.689 [2024-11-20 16:10:01.904766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.689 [2024-11-20 16:10:01.904775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1797050) - No such process 00:08:30.689 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1797050 00:08:30.689 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.689 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.689 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.947 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.947 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:30.947 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.947 16:10:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.947 delay0 00:08:30.947 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.947 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:30.947 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.947 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.947 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.947 16:10:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:30.947 [2024-11-20 16:10:02.012842] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:37.515 Initializing NVMe Controllers 00:08:37.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:37.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:37.515 Initialization complete. Launching workers. 00:08:37.515 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 139 00:08:37.515 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 426, failed to submit 33 00:08:37.515 success 228, unsuccessful 198, failed 0 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.515 rmmod nvme_tcp 00:08:37.515 rmmod nvme_fabrics 00:08:37.515 rmmod nvme_keyring 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1795192 ']' 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1795192 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1795192 ']' 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1795192 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # 
'[' Linux = Linux ']' 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795192 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795192' 00:08:37.515 killing process with pid 1795192 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1795192 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1795192 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.515 16:10:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.422 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:39.422 00:08:39.422 real 0m31.554s 00:08:39.422 user 0m41.978s 00:08:39.422 sys 0m11.293s 00:08:39.422 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.422 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.422 ************************************ 00:08:39.423 END TEST nvmf_zcopy 00:08:39.423 ************************************ 00:08:39.423 16:10:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:39.423 16:10:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:39.423 16:10:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.423 16:10:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.423 ************************************ 00:08:39.423 START TEST nvmf_nmic 00:08:39.423 ************************************ 00:08:39.423 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:39.683 * Looking for test storage... 
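The nvmf_zcopy wind-down traced just above (killprocess, the rmmod messages, the iptr helper, _remove_spdk_ns) amounts to a short manual sequence. The following is only a sketch using the PID, namespace, and interface names from this run; the final netns removal step is assumed rather than shown in the trace:

    kill 1795192                                          # killprocess: stop the nvmf_tgt started for zcopy (the harness also waits on it)
    modprobe -v -r nvme-tcp                               # unload host-side NVMe/TCP modules (the rmmod output above)
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the rules the harness tagged with SPDK_NVMF
    ip -4 addr flush cvl_0_1                              # clear the initiator-side address
    ip netns del cvl_0_0_ns_spdk                          # assumed: how _remove_spdk_ns retires the target namespace
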
00:08:39.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:39.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.683 --rc genhtml_branch_coverage=1 00:08:39.683 --rc genhtml_function_coverage=1 00:08:39.683 --rc genhtml_legend=1 00:08:39.683 --rc geninfo_all_blocks=1 00:08:39.683 --rc geninfo_unexecuted_blocks=1 00:08:39.683 00:08:39.683 ' 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:39.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.683 --rc genhtml_branch_coverage=1 00:08:39.683 --rc genhtml_function_coverage=1 00:08:39.683 --rc genhtml_legend=1 00:08:39.683 --rc geninfo_all_blocks=1 00:08:39.683 --rc geninfo_unexecuted_blocks=1 00:08:39.683 00:08:39.683 ' 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:39.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.683 --rc genhtml_branch_coverage=1 00:08:39.683 --rc genhtml_function_coverage=1 00:08:39.683 --rc genhtml_legend=1 00:08:39.683 --rc geninfo_all_blocks=1 00:08:39.683 --rc geninfo_unexecuted_blocks=1 00:08:39.683 00:08:39.683 ' 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:39.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.683 --rc genhtml_branch_coverage=1 00:08:39.683 --rc genhtml_function_coverage=1 00:08:39.683 --rc genhtml_legend=1 00:08:39.683 --rc geninfo_all_blocks=1 00:08:39.683 --rc geninfo_unexecuted_blocks=1 00:08:39.683 00:08:39.683 ' 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
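The lcov version probe traced above (lt 1.15 2 via cmp_versions and decimal) is a plain component-wise comparison used to decide whether the extra --rc lcov_* flags are needed. Below is a condensed sketch of the same idea, not the harness code itself, and it assumes purely numeric version components:

    lt() {                                   # true (returns 0) when version $1 sorts before version $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                             # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && \
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
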
00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.683 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:39.684 
16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.684 16:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.252 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.252 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:46.252 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:46.252 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:46.252 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:46.252 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:46.253 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:46.253 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:46.253 16:10:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:46.253 Found net devices under 0000:86:00.0: cvl_0_0 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:46.253 Found net devices under 0000:86:00.1: cvl_0_1 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:46.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:08:46.253 00:08:46.253 --- 10.0.0.2 ping statistics --- 00:08:46.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.253 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:08:46.253 00:08:46.253 --- 10.0.0.1 ping statistics --- 00:08:46.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.253 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.253 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1802656 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1802656 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1802656 ']' 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.254 16:10:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.254 [2024-11-20 16:10:16.840070] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
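The NIC discovery and plumbing traced above end with the two e810 ports exposed as cvl_0_0 (target side, moved into a private network namespace) and cvl_0_1 (initiator side, left in the root namespace), and with nvmf_tgt started inside that namespace. A sketch of the equivalent manual bring-up, using the addresses and names from this run; error handling and the full iptables comment string are omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                        # root namespace -> target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespace -> initiator address
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Once /var/tmp/spdk.sock appears (which is what waitforlisten 1802656 polls for above), the RPC configuration that follows can start.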
00:08:46.254 [2024-11-20 16:10:16.840115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.254 [2024-11-20 16:10:16.917347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.254 [2024-11-20 16:10:16.957811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.254 [2024-11-20 16:10:16.957849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.254 [2024-11-20 16:10:16.957855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.254 [2024-11-20 16:10:16.957861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.254 [2024-11-20 16:10:16.957865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.254 [2024-11-20 16:10:16.959302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.254 [2024-11-20 16:10:16.959409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.254 [2024-11-20 16:10:16.959518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.254 [2024-11-20 16:10:16.959520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.513 [2024-11-20 16:10:17.727227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.513 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.773 Malloc0 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.773 [2024-11-20 16:10:17.791930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:46.773 test case1: single bdev can't be used in multiple subsystems 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.773 [2024-11-20 16:10:17.819822] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:46.773 [2024-11-20 16:10:17.819841] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:46.773 [2024-11-20 16:10:17.819848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.773 request: 00:08:46.773 { 00:08:46.773 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:46.773 "namespace": { 00:08:46.773 "bdev_name": "Malloc0", 00:08:46.773 "no_auto_visible": false 
00:08:46.773 }, 00:08:46.773 "method": "nvmf_subsystem_add_ns", 00:08:46.773 "req_id": 1 00:08:46.773 } 00:08:46.773 Got JSON-RPC error response 00:08:46.773 response: 00:08:46.773 { 00:08:46.773 "code": -32602, 00:08:46.773 "message": "Invalid parameters" 00:08:46.773 } 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:46.773 Adding namespace failed - expected result. 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:46.773 test case2: host connect to nvmf target in multiple paths 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.773 [2024-11-20 16:10:17.831962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.773 16:10:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:48.154 16:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:49.098 16:10:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:49.098 16:10:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:49.098 16:10:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:49.098 16:10:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:49.098 16:10:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:51.002 16:10:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:51.002 16:10:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:51.002 16:10:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:51.002 16:10:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:51.002 16:10:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:51.002 16:10:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:51.002 16:10:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:51.270 [global] 00:08:51.270 thread=1 00:08:51.270 invalidate=1 00:08:51.270 rw=write 00:08:51.270 time_based=1 00:08:51.270 runtime=1 00:08:51.270 ioengine=libaio 00:08:51.270 direct=1 00:08:51.270 bs=4096 00:08:51.270 iodepth=1 00:08:51.270 norandommap=0 00:08:51.270 numjobs=1 00:08:51.270 00:08:51.270 verify_dump=1 00:08:51.270 verify_backlog=512 00:08:51.270 verify_state_save=0 00:08:51.270 do_verify=1 00:08:51.270 verify=crc32c-intel 00:08:51.270 [job0] 00:08:51.270 filename=/dev/nvme0n1 00:08:51.270 Could not set queue depth (nvme0n1) 00:08:51.527 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.527 fio-3.35 00:08:51.527 Starting 1 thread 00:08:52.899 00:08:52.899 job0: (groupid=0, jobs=1): err= 0: pid=1803739: Wed Nov 20 16:10:23 2024 00:08:52.899 read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec) 00:08:52.899 slat (nsec): min=9433, max=23787, avg=20885.13, stdev=3347.86 00:08:52.899 clat (usec): min=40860, max=43955, avg=41104.76, stdev=624.26 00:08:52.899 lat (usec): min=40882, max=43972, avg=41125.65, stdev=623.24 00:08:52.899 clat percentiles (usec): 00:08:52.899 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:52.899 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:52.899 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:52.899 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:08:52.899 | 99.99th=[43779] 00:08:52.899 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:08:52.899 slat (nsec): min=10105, max=45271, avg=11130.81, stdev=2230.69 00:08:52.899 clat (usec): min=115, max=332, avg=148.68, stdev=19.35 00:08:52.899 lat (usec): min=125, max=377, avg=159.81, stdev=20.25 00:08:52.899 clat percentiles (usec): 00:08:52.899 | 1.00th=[ 120], 5.00th=[ 122], 10.00th=[ 123], 20.00th=[ 126], 00:08:52.899 | 30.00th=[ 133], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:08:52.899 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 172], 00:08:52.899 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 334], 99.95th=[ 334], 00:08:52.899 | 99.99th=[ 334] 00:08:52.899 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:52.899 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:52.899 lat (usec) : 250=95.51%, 500=0.19% 00:08:52.899 lat (msec) : 50=4.30% 00:08:52.899 cpu : usr=0.39%, sys=0.88%, ctx=535, majf=0, minf=1 00:08:52.899 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.899 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.899 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.899 00:08:52.899 Run status group 0 (all jobs): 00:08:52.899 READ: bw=89.4KiB/s (91.6kB/s), 89.4KiB/s-89.4KiB/s (91.6kB/s-91.6kB/s), io=92.0KiB (94.2kB), run=1029-1029msec 00:08:52.899 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:08:52.899 00:08:52.899 Disk stats (read/write): 00:08:52.899 nvme0n1: ios=69/512, merge=0/0, ticks=1011/71, in_queue=1082, util=95.59% 00:08:52.899 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:52.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:52.899 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:52.899 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:52.900 rmmod nvme_tcp 00:08:52.900 rmmod nvme_fabrics 00:08:52.900 rmmod nvme_keyring 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1802656 ']' 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1802656 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1802656 ']' 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1802656 00:08:52.900 16:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:52.900 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.900 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1802656 00:08:52.900 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.900 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.900 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1802656' 00:08:52.900 killing process with pid 1802656 00:08:52.900 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1802656 00:08:52.900 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 1802656 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.159 16:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:55.698 00:08:55.698 real 0m15.722s 00:08:55.698 user 0m36.673s 00:08:55.698 sys 0m5.271s 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:55.698 ************************************ 00:08:55.698 END TEST nvmf_nmic 00:08:55.698 ************************************ 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.698 ************************************ 00:08:55.698 START TEST nvmf_fio_target 00:08:55.698 ************************************ 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:55.698 * Looking for test storage... 
00:08:55.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.698 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.698 --rc genhtml_branch_coverage=1 00:08:55.698 --rc genhtml_function_coverage=1 00:08:55.698 --rc genhtml_legend=1 00:08:55.698 --rc geninfo_all_blocks=1 00:08:55.698 --rc geninfo_unexecuted_blocks=1 00:08:55.698 00:08:55.698 ' 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.699 --rc genhtml_branch_coverage=1 00:08:55.699 --rc genhtml_function_coverage=1 00:08:55.699 --rc genhtml_legend=1 00:08:55.699 --rc geninfo_all_blocks=1 00:08:55.699 --rc geninfo_unexecuted_blocks=1 00:08:55.699 00:08:55.699 ' 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.699 --rc genhtml_branch_coverage=1 00:08:55.699 --rc genhtml_function_coverage=1 00:08:55.699 --rc genhtml_legend=1 00:08:55.699 --rc geninfo_all_blocks=1 00:08:55.699 --rc geninfo_unexecuted_blocks=1 00:08:55.699 00:08:55.699 ' 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.699 --rc genhtml_branch_coverage=1 00:08:55.699 --rc genhtml_function_coverage=1 00:08:55.699 --rc genhtml_legend=1 00:08:55.699 --rc geninfo_all_blocks=1 00:08:55.699 --rc geninfo_unexecuted_blocks=1 00:08:55.699 00:08:55.699 ' 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:55.699 16:10:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:55.699 16:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.272 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.272 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:02.272 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:02.272 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:02.272 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:02.272 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:02.272 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:02.272 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:02.272 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:02.272 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.273 16:10:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:02.273 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:02.273 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.273 16:10:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:02.273 Found net devices under 0000:86:00.0: cvl_0_0 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:02.273 Found net devices under 0000:86:00.1: cvl_0_1 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:02.273 16:10:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:02.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:09:02.273 00:09:02.273 --- 10.0.0.2 ping statistics --- 00:09:02.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.273 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:09:02.273 00:09:02.273 --- 10.0.0.1 ping statistics --- 00:09:02.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.273 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:02.273 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1807501 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1807501 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1807501 ']' 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.274 16:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.274 [2024-11-20 16:10:32.668987] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:09:02.274 [2024-11-20 16:10:32.669040] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.274 [2024-11-20 16:10:32.751180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.274 [2024-11-20 16:10:32.794026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.274 [2024-11-20 16:10:32.794061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.274 [2024-11-20 16:10:32.794069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.274 [2024-11-20 16:10:32.794076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.274 [2024-11-20 16:10:32.794081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.274 [2024-11-20 16:10:32.795569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.274 [2024-11-20 16:10:32.795677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.274 [2024-11-20 16:10:32.795784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.274 [2024-11-20 16:10:32.795785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.274 16:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.274 16:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:02.274 16:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.274 16:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.274 16:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.531 16:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.531 16:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.531 [2024-11-20 16:10:33.700366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.531 16:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.788 16:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:02.788 16:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.044 16:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:03.044 16:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.320 16:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:03.320 16:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.576 16:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:03.576 16:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:03.576 16:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.833 16:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:03.833 16:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.089 16:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:04.089 16:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.368 16:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:04.368 16:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:04.666 16:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.666 16:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:04.666 16:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:04.926 16:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:04.926 16:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.183 16:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.183 [2024-11-20 16:10:36.383066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.183 16:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:05.438 16:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:05.694 16:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.063 16:10:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:07.064 16:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:07.064 16:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.064 16:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:07.064 16:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:07.064 16:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:08.964 16:10:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:08.964 16:10:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:08.964 16:10:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.964 16:10:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:08.964 16:10:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.964 16:10:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:08.964 16:10:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:08.964 [global] 00:09:08.964 thread=1 00:09:08.964 invalidate=1 00:09:08.964 rw=write 00:09:08.964 time_based=1 00:09:08.964 runtime=1 00:09:08.964 ioengine=libaio 00:09:08.964 direct=1 00:09:08.964 bs=4096 00:09:08.964 iodepth=1 00:09:08.964 norandommap=0 00:09:08.964 numjobs=1 00:09:08.964 00:09:08.964 verify_dump=1 00:09:08.964 verify_backlog=512 00:09:08.964 verify_state_save=0 00:09:08.964 do_verify=1 00:09:08.964 verify=crc32c-intel 00:09:08.964 [job0] 00:09:08.964 filename=/dev/nvme0n1 00:09:08.964 [job1] 00:09:08.964 filename=/dev/nvme0n2 00:09:08.964 [job2] 00:09:08.964 filename=/dev/nvme0n3 00:09:08.964 [job3] 00:09:08.964 filename=/dev/nvme0n4 00:09:08.964 Could not set queue depth (nvme0n1) 00:09:08.964 Could not set queue depth (nvme0n2) 00:09:08.964 Could not set queue depth (nvme0n3) 00:09:08.964 Could not set queue depth (nvme0n4) 00:09:09.222 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.222 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.222 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.222 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.222 fio-3.35 00:09:09.222 Starting 4 threads 00:09:10.596 00:09:10.596 job0: (groupid=0, jobs=1): err= 0: pid=1809081: Wed Nov 20 16:10:41 2024 00:09:10.596 read: IOPS=20, BW=81.6KiB/s (83.6kB/s)(84.0KiB/1029msec) 00:09:10.596 slat (nsec): min=8996, max=25922, avg=22314.90, stdev=3142.14 00:09:10.596 clat (usec): min=40659, max=41894, avg=41001.90, stdev=228.09 00:09:10.596 lat (usec): min=40668, max=41917, avg=41024.22, stdev=229.13 00:09:10.596 clat percentiles (usec): 00:09:10.596 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 
20.00th=[40633], 00:09:10.596 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:10.596 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:10.596 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:10.596 | 99.99th=[41681] 00:09:10.596 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:09:10.596 slat (usec): min=10, max=40666, avg=126.25, stdev=1963.11 00:09:10.596 clat (usec): min=117, max=343, avg=197.39, stdev=49.26 00:09:10.596 lat (usec): min=128, max=40974, avg=323.64, stdev=1970.80 00:09:10.596 clat percentiles (usec): 00:09:10.596 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 130], 20.00th=[ 143], 00:09:10.596 | 30.00th=[ 155], 40.00th=[ 178], 50.00th=[ 210], 60.00th=[ 239], 00:09:10.596 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:09:10.596 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 343], 99.95th=[ 343], 00:09:10.596 | 99.99th=[ 343] 00:09:10.596 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.596 lat (usec) : 250=93.62%, 500=2.44% 00:09:10.596 lat (msec) : 50=3.94% 00:09:10.596 cpu : usr=0.19%, sys=0.58%, ctx=539, majf=0, minf=1 00:09:10.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.596 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.596 job1: (groupid=0, jobs=1): err= 0: pid=1809082: Wed Nov 20 16:10:41 2024 00:09:10.596 read: IOPS=22, BW=89.1KiB/s (91.2kB/s)(92.0KiB/1033msec) 00:09:10.596 slat (nsec): min=9756, max=24313, avg=21478.57, stdev=2646.57 00:09:10.596 clat (usec): min=40841, max=41298, avg=40984.36, stdev=94.41 00:09:10.596 lat (usec): min=40863, max=41308, avg=41005.83, stdev=92.48 00:09:10.596 clat percentiles (usec): 00:09:10.596 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:10.596 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:10.596 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:10.596 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:10.596 | 99.99th=[41157] 00:09:10.596 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:09:10.596 slat (nsec): min=10095, max=40063, avg=11213.93, stdev=1833.88 00:09:10.596 clat (usec): min=134, max=344, avg=160.24, stdev=15.93 00:09:10.596 lat (usec): min=145, max=384, avg=171.45, stdev=16.73 00:09:10.596 clat percentiles (usec): 00:09:10.596 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:09:10.596 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:09:10.596 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 188], 00:09:10.596 | 99.00th=[ 196], 99.50th=[ 196], 99.90th=[ 347], 99.95th=[ 347], 00:09:10.596 | 99.99th=[ 347] 00:09:10.596 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.596 lat (usec) : 250=95.51%, 500=0.19% 00:09:10.596 lat (msec) : 50=4.30% 00:09:10.596 cpu : usr=0.78%, sys=0.48%, ctx=535, majf=0, minf=1 00:09:10.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:09:10.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.596 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.596 job2: (groupid=0, jobs=1): err= 0: pid=1809084: Wed Nov 20 16:10:41 2024 00:09:10.596 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 00:09:10.596 slat (nsec): min=10152, max=24684, avg=22296.10, stdev=2827.16 00:09:10.596 clat (usec): min=40863, max=41137, avg=40973.49, stdev=67.12 00:09:10.596 lat (usec): min=40886, max=41147, avg=40995.78, stdev=65.62 00:09:10.596 clat percentiles (usec): 00:09:10.596 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:10.596 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:10.596 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:10.596 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:10.596 | 99.99th=[41157] 00:09:10.596 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:09:10.596 slat (usec): min=11, max=40587, avg=126.82, stdev=1961.43 00:09:10.596 clat (usec): min=124, max=360, avg=165.80, stdev=23.52 00:09:10.596 lat (usec): min=136, max=40948, avg=292.62, stdev=1972.55 00:09:10.596 clat percentiles (usec): 00:09:10.596 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 145], 20.00th=[ 149], 00:09:10.596 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:09:10.596 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 202], 00:09:10.596 | 99.00th=[ 219], 99.50th=[ 281], 99.90th=[ 363], 99.95th=[ 363], 00:09:10.596 | 99.99th=[ 363] 00:09:10.596 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.596 lat (usec) : 250=95.50%, 500=0.56% 00:09:10.596 lat (msec) : 50=3.94% 00:09:10.596 cpu : usr=0.30%, sys=1.09%, ctx=536, majf=0, minf=1 00:09:10.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.596 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.596 job3: (groupid=0, jobs=1): err= 0: pid=1809085: Wed Nov 20 16:10:41 2024 00:09:10.596 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec) 00:09:10.596 slat (nsec): min=9928, max=25022, avg=21925.57, stdev=2728.01 00:09:10.596 clat (usec): min=40602, max=41124, avg=40954.10, stdev=103.64 00:09:10.596 lat (usec): min=40612, max=41145, avg=40976.03, stdev=105.39 00:09:10.596 clat percentiles (usec): 00:09:10.596 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:10.596 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:10.596 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:10.596 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:10.596 | 99.99th=[41157] 00:09:10.596 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:09:10.596 slat (nsec): min=10395, max=39917, avg=11760.14, stdev=2120.41 00:09:10.596 clat (usec): min=142, max=346, avg=168.68, stdev=19.90 00:09:10.596 lat (usec): 
min=153, max=385, avg=180.44, stdev=20.54 00:09:10.596 clat percentiles (usec): 00:09:10.596 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:09:10.596 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:09:10.596 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:09:10.596 | 99.00th=[ 217], 99.50th=[ 225], 99.90th=[ 347], 99.95th=[ 347], 00:09:10.596 | 99.99th=[ 347] 00:09:10.596 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.596 lat (usec) : 250=95.51%, 500=0.19% 00:09:10.596 lat (msec) : 50=4.30% 00:09:10.596 cpu : usr=0.39%, sys=0.87%, ctx=535, majf=0, minf=1 00:09:10.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.596 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.596 00:09:10.596 Run status group 0 (all jobs): 00:09:10.596 READ: bw=339KiB/s (348kB/s), 81.6KiB/s-89.1KiB/s (83.6kB/s-91.2kB/s), io=352KiB (360kB), run=1013-1037msec 00:09:10.596 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2022KiB/s (2022kB/s-2070kB/s), io=8192KiB (8389kB), run=1013-1037msec 00:09:10.596 00:09:10.596 Disk stats (read/write): 00:09:10.596 nvme0n1: ios=38/512, merge=0/0, ticks=1477/99, in_queue=1576, util=87.27% 00:09:10.596 nvme0n2: ios=67/512, merge=0/0, ticks=749/80, in_queue=829, util=85.58% 00:09:10.596 nvme0n3: ios=38/512, merge=0/0, ticks=1519/71, in_queue=1590, util=95.44% 00:09:10.596 nvme0n4: ios=74/512, merge=0/0, ticks=771/76, in_queue=847, util=94.03% 00:09:10.596 16:10:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:10.596 [global] 00:09:10.596 thread=1 00:09:10.596 invalidate=1 00:09:10.597 rw=randwrite 00:09:10.597 time_based=1 00:09:10.597 runtime=1 00:09:10.597 ioengine=libaio 00:09:10.597 direct=1 00:09:10.597 bs=4096 00:09:10.597 iodepth=1 00:09:10.597 norandommap=0 00:09:10.597 numjobs=1 00:09:10.597 00:09:10.597 verify_dump=1 00:09:10.597 verify_backlog=512 00:09:10.597 verify_state_save=0 00:09:10.597 do_verify=1 00:09:10.597 verify=crc32c-intel 00:09:10.597 [job0] 00:09:10.597 filename=/dev/nvme0n1 00:09:10.597 [job1] 00:09:10.597 filename=/dev/nvme0n2 00:09:10.597 [job2] 00:09:10.597 filename=/dev/nvme0n3 00:09:10.597 [job3] 00:09:10.597 filename=/dev/nvme0n4 00:09:10.597 Could not set queue depth (nvme0n1) 00:09:10.597 Could not set queue depth (nvme0n2) 00:09:10.597 Could not set queue depth (nvme0n3) 00:09:10.597 Could not set queue depth (nvme0n4) 00:09:10.855 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.855 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.855 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.855 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.855 fio-3.35 00:09:10.855 Starting 4 threads 00:09:12.229 00:09:12.229 job0: (groupid=0, jobs=1): err= 0: pid=1809459: Wed Nov 20 16:10:43 2024 
00:09:12.229 read: IOPS=2621, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:09:12.229 slat (nsec): min=6585, max=27929, avg=7354.91, stdev=990.19 00:09:12.229 clat (usec): min=153, max=1204, avg=191.24, stdev=35.35 00:09:12.229 lat (usec): min=160, max=1211, avg=198.60, stdev=35.41 00:09:12.229 clat percentiles (usec): 00:09:12.229 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:09:12.229 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 188], 00:09:12.229 | 70.00th=[ 200], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 233], 00:09:12.229 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 302], 99.95th=[ 1074], 00:09:12.229 | 99.99th=[ 1205] 00:09:12.229 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:12.229 slat (nsec): min=9286, max=44171, avg=10415.98, stdev=1539.05 00:09:12.229 clat (usec): min=104, max=1155, avg=141.66, stdev=44.71 00:09:12.229 lat (usec): min=120, max=1166, avg=152.08, stdev=44.80 00:09:12.229 clat percentiles (usec): 00:09:12.229 | 1.00th=[ 114], 5.00th=[ 117], 10.00th=[ 119], 20.00th=[ 122], 00:09:12.229 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 133], 00:09:12.229 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 204], 95.00th=[ 221], 00:09:12.229 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 326], 99.95th=[ 1074], 00:09:12.229 | 99.99th=[ 1156] 00:09:12.229 bw ( KiB/s): min=12288, max=12288, per=62.16%, avg=12288.00, stdev= 0.00, samples=1 00:09:12.229 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:12.229 lat (usec) : 250=98.61%, 500=1.30%, 1000=0.02% 00:09:12.229 lat (msec) : 2=0.07% 00:09:12.229 cpu : usr=2.70%, sys=5.20%, ctx=5697, majf=0, minf=1 00:09:12.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.229 issued rwts: total=2624,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.229 job1: (groupid=0, jobs=1): err= 0: pid=1809460: Wed Nov 20 16:10:43 2024 00:09:12.229 read: IOPS=703, BW=2814KiB/s (2882kB/s)(2820KiB/1002msec) 00:09:12.229 slat (nsec): min=6492, max=27769, avg=7731.51, stdev=2563.43 00:09:12.229 clat (usec): min=173, max=42032, avg=1150.44, stdev=6080.43 00:09:12.229 lat (usec): min=181, max=42060, avg=1158.17, stdev=6082.56 00:09:12.229 clat percentiles (usec): 00:09:12.229 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:09:12.229 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 225], 00:09:12.229 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 269], 00:09:12.229 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:12.229 | 99.99th=[42206] 00:09:12.229 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:09:12.229 slat (nsec): min=9146, max=47974, avg=11311.19, stdev=2427.36 00:09:12.229 clat (usec): min=116, max=360, avg=165.75, stdev=37.71 00:09:12.229 lat (usec): min=125, max=396, avg=177.06, stdev=38.94 00:09:12.229 clat percentiles (usec): 00:09:12.229 | 1.00th=[ 123], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 135], 00:09:12.229 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 145], 60.00th=[ 174], 00:09:12.229 | 70.00th=[ 194], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 229], 00:09:12.229 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 289], 99.95th=[ 359], 00:09:12.229 | 99.99th=[ 359] 00:09:12.229 bw ( 
KiB/s): min= 8192, max= 8192, per=41.44%, avg=8192.00, stdev= 0.00, samples=1 00:09:12.229 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:12.229 lat (usec) : 250=96.01%, 500=3.07% 00:09:12.229 lat (msec) : 50=0.93% 00:09:12.229 cpu : usr=0.60%, sys=2.10%, ctx=1729, majf=0, minf=1 00:09:12.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.229 issued rwts: total=705,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.229 job2: (groupid=0, jobs=1): err= 0: pid=1809461: Wed Nov 20 16:10:43 2024 00:09:12.229 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 00:09:12.229 slat (nsec): min=8658, max=23491, avg=21204.91, stdev=4291.33 00:09:12.229 clat (usec): min=40825, max=41202, avg=40976.58, stdev=87.29 00:09:12.229 lat (usec): min=40849, max=41216, avg=40997.79, stdev=85.10 00:09:12.229 clat percentiles (usec): 00:09:12.229 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:12.229 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:12.229 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:12.229 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:12.229 | 99.99th=[41157] 00:09:12.229 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:09:12.229 slat (nsec): min=8987, max=40831, avg=10001.26, stdev=1698.97 00:09:12.229 clat (usec): min=134, max=300, avg=165.71, stdev=14.09 00:09:12.229 lat (usec): min=144, max=341, avg=175.71, stdev=14.77 00:09:12.229 clat percentiles (usec): 00:09:12.229 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:12.229 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:09:12.229 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 190], 00:09:12.229 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 302], 99.95th=[ 302], 00:09:12.229 | 99.99th=[ 302] 00:09:12.229 bw ( KiB/s): min= 4096, max= 4096, per=20.72%, avg=4096.00, stdev= 0.00, samples=1 00:09:12.229 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:12.229 lat (usec) : 250=95.51%, 500=0.19% 00:09:12.229 lat (msec) : 50=4.30% 00:09:12.229 cpu : usr=0.19%, sys=0.58%, ctx=535, majf=0, minf=1 00:09:12.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.229 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.229 job3: (groupid=0, jobs=1): err= 0: pid=1809462: Wed Nov 20 16:10:43 2024 00:09:12.229 read: IOPS=24, BW=96.5KiB/s (98.8kB/s)(100KiB/1036msec) 00:09:12.229 slat (nsec): min=7700, max=25171, avg=22058.36, stdev=5052.57 00:09:12.229 clat (usec): min=237, max=41959, avg=37743.98, stdev=11263.65 00:09:12.229 lat (usec): min=247, max=41983, avg=37766.04, stdev=11265.11 00:09:12.229 clat percentiles (usec): 00:09:12.229 | 1.00th=[ 239], 5.00th=[ 416], 10.00th=[40633], 20.00th=[40633], 00:09:12.229 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:12.229 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:09:12.229 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:12.229 | 99.99th=[42206] 00:09:12.229 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:09:12.229 slat (nsec): min=9525, max=37130, avg=10593.98, stdev=1538.74 00:09:12.229 clat (usec): min=142, max=232, avg=166.20, stdev=11.98 00:09:12.229 lat (usec): min=153, max=269, avg=176.79, stdev=12.40 00:09:12.229 clat percentiles (usec): 00:09:12.229 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:09:12.229 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:09:12.229 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:09:12.229 | 99.00th=[ 202], 99.50th=[ 219], 99.90th=[ 233], 99.95th=[ 233], 00:09:12.229 | 99.99th=[ 233] 00:09:12.229 bw ( KiB/s): min= 4096, max= 4096, per=20.72%, avg=4096.00, stdev= 0.00, samples=1 00:09:12.229 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:12.230 lat (usec) : 250=95.53%, 500=0.19% 00:09:12.230 lat (msec) : 50=4.28% 00:09:12.230 cpu : usr=0.19%, sys=0.58%, ctx=538, majf=0, minf=1 00:09:12.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.230 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.230 00:09:12.230 Run status group 0 (all jobs): 00:09:12.230 READ: bw=12.7MiB/s (13.4MB/s), 89.0KiB/s-10.2MiB/s (91.1kB/s-10.7MB/s), io=13.2MiB (13.8MB), run=1001-1036msec 00:09:12.230 WRITE: bw=19.3MiB/s (20.2MB/s), 1977KiB/s-12.0MiB/s (2024kB/s-12.6MB/s), io=20.0MiB (21.0MB), run=1001-1036msec 00:09:12.230 00:09:12.230 Disk stats (read/write): 00:09:12.230 nvme0n1: ios=2273/2560, merge=0/0, ticks=1332/359, in_queue=1691, util=90.28% 00:09:12.230 nvme0n2: ios=751/1024, merge=0/0, ticks=702/164, in_queue=866, util=91.29% 00:09:12.230 nvme0n3: ios=75/512, merge=0/0, ticks=812/82, in_queue=894, util=94.71% 00:09:12.230 nvme0n4: ios=64/512, merge=0/0, ticks=1611/82, in_queue=1693, util=100.00% 00:09:12.230 16:10:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:12.230 [global] 00:09:12.230 thread=1 00:09:12.230 invalidate=1 00:09:12.230 rw=write 00:09:12.230 time_based=1 00:09:12.230 runtime=1 00:09:12.230 ioengine=libaio 00:09:12.230 direct=1 00:09:12.230 bs=4096 00:09:12.230 iodepth=128 00:09:12.230 norandommap=0 00:09:12.230 numjobs=1 00:09:12.230 00:09:12.230 verify_dump=1 00:09:12.230 verify_backlog=512 00:09:12.230 verify_state_save=0 00:09:12.230 do_verify=1 00:09:12.230 verify=crc32c-intel 00:09:12.230 [job0] 00:09:12.230 filename=/dev/nvme0n1 00:09:12.230 [job1] 00:09:12.230 filename=/dev/nvme0n2 00:09:12.230 [job2] 00:09:12.230 filename=/dev/nvme0n3 00:09:12.230 [job3] 00:09:12.230 filename=/dev/nvme0n4 00:09:12.230 Could not set queue depth (nvme0n1) 00:09:12.230 Could not set queue depth (nvme0n2) 00:09:12.230 Could not set queue depth (nvme0n3) 00:09:12.230 Could not set queue depth (nvme0n4) 00:09:12.488 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.488 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.488 job2: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.488 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.488 fio-3.35 00:09:12.488 Starting 4 threads 00:09:13.895 00:09:13.895 job0: (groupid=0, jobs=1): err= 0: pid=1809830: Wed Nov 20 16:10:44 2024 00:09:13.895 read: IOPS=3324, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1009msec) 00:09:13.895 slat (nsec): min=1136, max=10648k, avg=106203.64, stdev=707491.27 00:09:13.895 clat (usec): min=4710, max=33121, avg=13369.40, stdev=3898.55 00:09:13.895 lat (usec): min=4720, max=33132, avg=13475.61, stdev=3949.14 00:09:13.895 clat percentiles (usec): 00:09:13.895 | 1.00th=[ 6325], 5.00th=[ 8160], 10.00th=[ 9503], 20.00th=[10028], 00:09:13.895 | 30.00th=[10552], 40.00th=[11863], 50.00th=[12649], 60.00th=[13566], 00:09:13.895 | 70.00th=[15008], 80.00th=[16909], 90.00th=[18482], 95.00th=[19006], 00:09:13.895 | 99.00th=[27132], 99.50th=[30016], 99.90th=[33162], 99.95th=[33162], 00:09:13.895 | 99.99th=[33162] 00:09:13.895 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:09:13.895 slat (usec): min=2, max=47656, avg=169.03, stdev=1667.99 00:09:13.895 clat (msec): min=2, max=177, avg=19.77, stdev=19.53 00:09:13.895 lat (msec): min=2, max=177, avg=19.94, stdev=19.75 00:09:13.895 clat percentiles (msec): 00:09:13.895 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:09:13.895 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 15], 00:09:13.895 | 70.00th=[ 18], 80.00th=[ 24], 90.00th=[ 42], 95.00th=[ 57], 00:09:13.895 | 99.00th=[ 90], 99.50th=[ 136], 99.90th=[ 178], 99.95th=[ 178], 00:09:13.895 | 99.99th=[ 178] 00:09:13.895 bw ( KiB/s): min=12288, max=16384, per=19.71%, avg=14336.00, stdev=2896.31, samples=2 00:09:13.895 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:13.895 lat (msec) : 4=0.17%, 10=19.47%, 20=64.70%, 50=12.45%, 100=2.74% 00:09:13.895 lat (msec) : 250=0.46% 00:09:13.895 cpu : usr=1.69%, sys=5.75%, ctx=352, majf=0, minf=1 00:09:13.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:13.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.895 issued rwts: total=3354,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.895 job1: (groupid=0, jobs=1): err= 0: pid=1809831: Wed Nov 20 16:10:44 2024 00:09:13.895 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec) 00:09:13.895 slat (nsec): min=1343, max=13070k, avg=86302.72, stdev=656844.29 00:09:13.895 clat (usec): min=3360, max=30878, avg=11106.26, stdev=3488.21 00:09:13.895 lat (usec): min=3366, max=30901, avg=11192.56, stdev=3536.87 00:09:13.895 clat percentiles (usec): 00:09:13.895 | 1.00th=[ 4424], 5.00th=[ 7177], 10.00th=[ 8225], 20.00th=[ 8979], 00:09:13.895 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10421], 00:09:13.895 | 70.00th=[11469], 80.00th=[13435], 90.00th=[17433], 95.00th=[18744], 00:09:13.895 | 99.00th=[20579], 99.50th=[20579], 99.90th=[23462], 99.95th=[25297], 00:09:13.895 | 99.99th=[30802] 00:09:13.895 write: IOPS=6326, BW=24.7MiB/s (25.9MB/s)(25.0MiB/1010msec); 0 zone resets 00:09:13.895 slat (usec): min=2, max=7328, avg=53.11, stdev=243.77 00:09:13.895 clat (usec): min=1076, max=50267, avg=9389.95, stdev=5320.62 00:09:13.895 lat (usec): min=1087, max=50276, 
avg=9443.06, stdev=5337.30 00:09:13.895 clat percentiles (usec): 00:09:13.895 | 1.00th=[ 2245], 5.00th=[ 3621], 10.00th=[ 4817], 20.00th=[ 5800], 00:09:13.895 | 30.00th=[ 6980], 40.00th=[ 7963], 50.00th=[ 9241], 60.00th=[ 9634], 00:09:13.895 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[13173], 95.00th=[18744], 00:09:13.895 | 99.00th=[33424], 99.50th=[40633], 99.90th=[45876], 99.95th=[49546], 00:09:13.895 | 99.99th=[50070] 00:09:13.895 bw ( KiB/s): min=24304, max=25800, per=34.45%, avg=25052.00, stdev=1057.83, samples=2 00:09:13.895 iops : min= 6076, max= 6450, avg=6263.00, stdev=264.46, samples=2 00:09:13.895 lat (msec) : 2=0.26%, 4=3.30%, 10=62.55%, 20=30.23%, 50=3.65% 00:09:13.895 lat (msec) : 100=0.01% 00:09:13.895 cpu : usr=5.05%, sys=6.05%, ctx=691, majf=0, minf=2 00:09:13.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:13.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.895 issued rwts: total=6144,6390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.895 job2: (groupid=0, jobs=1): err= 0: pid=1809832: Wed Nov 20 16:10:44 2024 00:09:13.895 read: IOPS=3416, BW=13.3MiB/s (14.0MB/s)(13.5MiB/1010msec) 00:09:13.895 slat (nsec): min=1183, max=34255k, avg=142394.41, stdev=1364517.32 00:09:13.895 clat (usec): min=777, max=71265, avg=20103.53, stdev=12409.79 00:09:13.895 lat (usec): min=5351, max=71288, avg=20245.93, stdev=12518.77 00:09:13.895 clat percentiles (usec): 00:09:13.895 | 1.00th=[ 7177], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10683], 00:09:13.895 | 30.00th=[11994], 40.00th=[13566], 50.00th=[15008], 60.00th=[15664], 00:09:13.895 | 70.00th=[22152], 80.00th=[31065], 90.00th=[39060], 95.00th=[46924], 00:09:13.895 | 99.00th=[56361], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:09:13.895 | 99.99th=[70779] 00:09:13.895 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:09:13.895 slat (nsec): min=1966, max=19347k, avg=120291.59, stdev=885998.46 00:09:13.895 clat (usec): min=698, max=61488, avg=16389.68, stdev=10113.15 00:09:13.895 lat (usec): min=711, max=61491, avg=16509.97, stdev=10191.91 00:09:13.895 clat percentiles (usec): 00:09:13.895 | 1.00th=[ 1991], 5.00th=[ 5669], 10.00th=[ 7963], 20.00th=[ 9896], 00:09:13.895 | 30.00th=[10945], 40.00th=[12518], 50.00th=[14091], 60.00th=[15270], 00:09:13.895 | 70.00th=[18744], 80.00th=[21365], 90.00th=[25297], 95.00th=[34866], 00:09:13.895 | 99.00th=[61080], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:09:13.895 | 99.99th=[61604] 00:09:13.895 bw ( KiB/s): min=12280, max=16392, per=19.71%, avg=14336.00, stdev=2907.62, samples=2 00:09:13.895 iops : min= 3070, max= 4098, avg=3584.00, stdev=726.91, samples=2 00:09:13.895 lat (usec) : 750=0.10%, 1000=0.01% 00:09:13.895 lat (msec) : 2=0.41%, 4=0.85%, 10=15.07%, 20=54.43%, 50=26.17% 00:09:13.895 lat (msec) : 100=2.96% 00:09:13.895 cpu : usr=1.98%, sys=3.96%, ctx=333, majf=0, minf=1 00:09:13.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:13.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.896 issued rwts: total=3451,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.896 job3: (groupid=0, jobs=1): err= 0: 
pid=1809834: Wed Nov 20 16:10:44 2024 00:09:13.896 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:09:13.896 slat (nsec): min=1277, max=14611k, avg=106106.31, stdev=770032.57 00:09:13.896 clat (usec): min=3933, max=41248, avg=12988.90, stdev=4458.87 00:09:13.896 lat (usec): min=3942, max=41274, avg=13095.01, stdev=4515.40 00:09:13.896 clat percentiles (usec): 00:09:13.896 | 1.00th=[ 5014], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[10421], 00:09:13.896 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11338], 60.00th=[11863], 00:09:13.896 | 70.00th=[14091], 80.00th=[15926], 90.00th=[18744], 95.00th=[21627], 00:09:13.896 | 99.00th=[29754], 99.50th=[29754], 99.90th=[30016], 99.95th=[34341], 00:09:13.896 | 99.99th=[41157] 00:09:13.896 write: IOPS=4769, BW=18.6MiB/s (19.5MB/s)(18.8MiB/1011msec); 0 zone resets 00:09:13.896 slat (usec): min=2, max=14473, avg=100.44, stdev=560.82 00:09:13.896 clat (usec): min=1503, max=51619, avg=14179.69, stdev=8947.95 00:09:13.896 lat (usec): min=1541, max=51631, avg=14280.13, stdev=8992.84 00:09:13.896 clat percentiles (usec): 00:09:13.896 | 1.00th=[ 3261], 5.00th=[ 5407], 10.00th=[ 7046], 20.00th=[ 9896], 00:09:13.896 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:09:13.896 | 70.00th=[13304], 80.00th=[17171], 90.00th=[22152], 95.00th=[40633], 00:09:13.896 | 99.00th=[51119], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:09:13.896 | 99.99th=[51643] 00:09:13.896 bw ( KiB/s): min=16384, max=21176, per=25.83%, avg=18780.00, stdev=3388.46, samples=2 00:09:13.896 iops : min= 4096, max= 5294, avg=4695.00, stdev=847.11, samples=2 00:09:13.896 lat (msec) : 2=0.10%, 4=1.09%, 10=16.55%, 20=72.21%, 50=9.38% 00:09:13.896 lat (msec) : 100=0.67% 00:09:13.896 cpu : usr=3.17%, sys=5.64%, ctx=604, majf=0, minf=1 00:09:13.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:13.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.896 issued rwts: total=4608,4822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.896 00:09:13.896 Run status group 0 (all jobs): 00:09:13.896 READ: bw=67.8MiB/s (71.1MB/s), 13.0MiB/s-23.8MiB/s (13.6MB/s-24.9MB/s), io=68.6MiB (71.9MB), run=1009-1011msec 00:09:13.896 WRITE: bw=71.0MiB/s (74.5MB/s), 13.9MiB/s-24.7MiB/s (14.5MB/s-25.9MB/s), io=71.8MiB (75.3MB), run=1009-1011msec 00:09:13.896 00:09:13.896 Disk stats (read/write): 00:09:13.896 nvme0n1: ios=2589/2688, merge=0/0, ticks=28203/41122, in_queue=69325, util=96.29% 00:09:13.896 nvme0n2: ios=5286/5632, merge=0/0, ticks=55971/48568, in_queue=104539, util=88.21% 00:09:13.896 nvme0n3: ios=3027/3072, merge=0/0, ticks=37904/35104, in_queue=73008, util=91.05% 00:09:13.896 nvme0n4: ios=3641/4096, merge=0/0, ticks=45807/59136, in_queue=104943, util=95.39% 00:09:13.896 16:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:13.896 [global] 00:09:13.896 thread=1 00:09:13.896 invalidate=1 00:09:13.896 rw=randwrite 00:09:13.896 time_based=1 00:09:13.896 runtime=1 00:09:13.896 ioengine=libaio 00:09:13.896 direct=1 00:09:13.896 bs=4096 00:09:13.896 iodepth=128 00:09:13.896 norandommap=0 00:09:13.896 numjobs=1 00:09:13.896 00:09:13.896 verify_dump=1 00:09:13.896 verify_backlog=512 00:09:13.896 verify_state_save=0 00:09:13.896 
do_verify=1 00:09:13.896 verify=crc32c-intel 00:09:13.896 [job0] 00:09:13.896 filename=/dev/nvme0n1 00:09:13.896 [job1] 00:09:13.896 filename=/dev/nvme0n2 00:09:13.896 [job2] 00:09:13.896 filename=/dev/nvme0n3 00:09:13.896 [job3] 00:09:13.896 filename=/dev/nvme0n4 00:09:13.896 Could not set queue depth (nvme0n1) 00:09:13.896 Could not set queue depth (nvme0n2) 00:09:13.896 Could not set queue depth (nvme0n3) 00:09:13.896 Could not set queue depth (nvme0n4) 00:09:14.154 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.154 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.154 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.154 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.154 fio-3.35 00:09:14.154 Starting 4 threads 00:09:15.529 00:09:15.529 job0: (groupid=0, jobs=1): err= 0: pid=1810218: Wed Nov 20 16:10:46 2024 00:09:15.529 read: IOPS=5808, BW=22.7MiB/s (23.8MB/s)(22.8MiB/1006msec) 00:09:15.529 slat (nsec): min=1251, max=10437k, avg=84182.03, stdev=602744.44 00:09:15.529 clat (usec): min=3515, max=37406, avg=10883.75, stdev=3168.08 00:09:15.529 lat (usec): min=3522, max=37414, avg=10967.93, stdev=3194.85 00:09:15.529 clat percentiles (usec): 00:09:15.529 | 1.00th=[ 4752], 5.00th=[ 6521], 10.00th=[ 8717], 20.00th=[ 9241], 00:09:15.529 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10421], 00:09:15.529 | 70.00th=[11207], 80.00th=[12518], 90.00th=[15139], 95.00th=[16909], 00:09:15.529 | 99.00th=[18744], 99.50th=[24511], 99.90th=[36963], 99.95th=[37487], 00:09:15.529 | 99.99th=[37487] 00:09:15.529 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:09:15.529 slat (nsec): min=1919, max=12657k, avg=73907.55, stdev=427456.31 00:09:15.529 clat (usec): min=938, max=34457, avg=10419.37, stdev=3562.82 00:09:15.529 lat (usec): min=965, max=34465, avg=10493.28, stdev=3602.24 00:09:15.529 clat percentiles (usec): 00:09:15.529 | 1.00th=[ 2868], 5.00th=[ 4948], 10.00th=[ 6980], 20.00th=[ 8717], 00:09:15.529 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10290], 00:09:15.529 | 70.00th=[10421], 80.00th=[11600], 90.00th=[14353], 95.00th=[18482], 00:09:15.529 | 99.00th=[23200], 99.50th=[23987], 99.90th=[30802], 99.95th=[30802], 00:09:15.529 | 99.99th=[34341] 00:09:15.529 bw ( KiB/s): min=23672, max=25432, per=32.96%, avg=24552.00, stdev=1244.51, samples=2 00:09:15.529 iops : min= 5918, max= 6358, avg=6138.00, stdev=311.13, samples=2 00:09:15.529 lat (usec) : 1000=0.02% 00:09:15.529 lat (msec) : 4=1.83%, 10=47.88%, 20=48.79%, 50=1.48% 00:09:15.529 cpu : usr=3.58%, sys=6.37%, ctx=691, majf=0, minf=1 00:09:15.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:15.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.530 issued rwts: total=5843,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.530 job1: (groupid=0, jobs=1): err= 0: pid=1810227: Wed Nov 20 16:10:46 2024 00:09:15.530 read: IOPS=3901, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1005msec) 00:09:15.530 slat (nsec): min=1092, max=13106k, avg=103931.41, stdev=584900.75 00:09:15.530 clat (usec): min=3057, max=57972, 
avg=13825.94, stdev=5562.58 00:09:15.530 lat (usec): min=5667, max=57979, avg=13929.87, stdev=5597.61 00:09:15.530 clat percentiles (usec): 00:09:15.530 | 1.00th=[ 7373], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10028], 00:09:15.530 | 30.00th=[10552], 40.00th=[11076], 50.00th=[13304], 60.00th=[13960], 00:09:15.530 | 70.00th=[15270], 80.00th=[16450], 90.00th=[17957], 95.00th=[20317], 00:09:15.530 | 99.00th=[42730], 99.50th=[42730], 99.90th=[50594], 99.95th=[50594], 00:09:15.530 | 99.99th=[57934] 00:09:15.530 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:15.530 slat (usec): min=2, max=23284, avg=139.23, stdev=958.88 00:09:15.530 clat (usec): min=4893, max=59876, avg=17515.83, stdev=11380.04 00:09:15.530 lat (usec): min=4918, max=59908, avg=17655.06, stdev=11465.04 00:09:15.530 clat percentiles (usec): 00:09:15.530 | 1.00th=[ 7046], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:09:15.530 | 30.00th=[10552], 40.00th=[11207], 50.00th=[12911], 60.00th=[13829], 00:09:15.530 | 70.00th=[18744], 80.00th=[22676], 90.00th=[35390], 95.00th=[42206], 00:09:15.530 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:09:15.530 | 99.99th=[60031] 00:09:15.530 bw ( KiB/s): min=12288, max=20439, per=21.97%, avg=16363.50, stdev=5763.63, samples=2 00:09:15.530 iops : min= 3072, max= 5109, avg=4090.50, stdev=1440.38, samples=2 00:09:15.530 lat (msec) : 4=0.01%, 10=23.10%, 20=59.96%, 50=15.49%, 100=1.43% 00:09:15.530 cpu : usr=2.99%, sys=4.58%, ctx=340, majf=0, minf=1 00:09:15.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:15.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.530 issued rwts: total=3921,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.530 job2: (groupid=0, jobs=1): err= 0: pid=1810239: Wed Nov 20 16:10:46 2024 00:09:15.530 read: IOPS=4748, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1002msec) 00:09:15.530 slat (nsec): min=1292, max=12741k, avg=103879.14, stdev=670264.55 00:09:15.530 clat (usec): min=1712, max=50273, avg=13618.07, stdev=5698.02 00:09:15.530 lat (usec): min=1715, max=50300, avg=13721.95, stdev=5744.51 00:09:15.530 clat percentiles (usec): 00:09:15.530 | 1.00th=[ 5669], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10814], 00:09:15.530 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125], 00:09:15.530 | 70.00th=[13042], 80.00th=[16319], 90.00th=[19006], 95.00th=[22676], 00:09:15.530 | 99.00th=[40109], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:09:15.530 | 99.99th=[50070] 00:09:15.530 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:15.530 slat (nsec): min=1833, max=14564k, avg=91828.07, stdev=540448.61 00:09:15.530 clat (usec): min=558, max=30547, avg=12179.92, stdev=2933.33 00:09:15.530 lat (usec): min=568, max=30577, avg=12271.75, stdev=2982.95 00:09:15.530 clat percentiles (usec): 00:09:15.530 | 1.00th=[ 4817], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[10814], 00:09:15.530 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:09:15.530 | 70.00th=[12387], 80.00th=[13042], 90.00th=[15664], 95.00th=[16909], 00:09:15.530 | 99.00th=[23200], 99.50th=[25560], 99.90th=[28705], 99.95th=[28705], 00:09:15.530 | 99.99th=[30540] 00:09:15.530 bw ( KiB/s): min=16888, max=24072, per=27.49%, avg=20480.00, stdev=5079.86, samples=2 00:09:15.530 iops : 
min= 4222, max= 6018, avg=5120.00, stdev=1269.96, samples=2 00:09:15.530 lat (usec) : 750=0.05% 00:09:15.530 lat (msec) : 2=0.15%, 4=0.15%, 10=8.96%, 20=85.08%, 50=5.60% 00:09:15.530 lat (msec) : 100=0.01% 00:09:15.530 cpu : usr=2.90%, sys=5.49%, ctx=537, majf=0, minf=2 00:09:15.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:15.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.530 issued rwts: total=4758,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.530 job3: (groupid=0, jobs=1): err= 0: pid=1810246: Wed Nov 20 16:10:46 2024 00:09:15.530 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:09:15.530 slat (nsec): min=1284, max=15481k, avg=136218.04, stdev=943690.46 00:09:15.530 clat (usec): min=5934, max=47503, avg=15557.80, stdev=5819.85 00:09:15.530 lat (usec): min=5945, max=47512, avg=15694.02, stdev=5904.70 00:09:15.530 clat percentiles (usec): 00:09:15.530 | 1.00th=[ 7832], 5.00th=[11076], 10.00th=[11338], 20.00th=[11863], 00:09:15.530 | 30.00th=[12125], 40.00th=[13435], 50.00th=[14222], 60.00th=[14877], 00:09:15.530 | 70.00th=[15270], 80.00th=[17171], 90.00th=[22414], 95.00th=[28181], 00:09:15.530 | 99.00th=[39584], 99.50th=[44303], 99.90th=[47449], 99.95th=[47449], 00:09:15.530 | 99.99th=[47449] 00:09:15.530 write: IOPS=3399, BW=13.3MiB/s (13.9MB/s)(13.4MiB/1009msec); 0 zone resets 00:09:15.530 slat (nsec): min=1995, max=9867.0k, avg=164410.50, stdev=713293.06 00:09:15.530 clat (usec): min=3038, max=63172, avg=23392.08, stdev=12088.41 00:09:15.530 lat (usec): min=3046, max=63183, avg=23556.49, stdev=12172.96 00:09:15.530 clat percentiles (usec): 00:09:15.530 | 1.00th=[ 5604], 5.00th=[ 7701], 10.00th=[10159], 20.00th=[11863], 00:09:15.530 | 30.00th=[13960], 40.00th=[17695], 50.00th=[22414], 60.00th=[26346], 00:09:15.530 | 70.00th=[30016], 80.00th=[33162], 90.00th=[37487], 95.00th=[49021], 00:09:15.530 | 99.00th=[54789], 99.50th=[56886], 99.90th=[56886], 99.95th=[63177], 00:09:15.530 | 99.99th=[63177] 00:09:15.530 bw ( KiB/s): min=12424, max=14000, per=17.74%, avg=13212.00, stdev=1114.40, samples=2 00:09:15.530 iops : min= 3106, max= 3500, avg=3303.00, stdev=278.60, samples=2 00:09:15.530 lat (msec) : 4=0.18%, 10=5.03%, 20=57.80%, 50=35.13%, 100=1.86% 00:09:15.530 cpu : usr=2.58%, sys=3.37%, ctx=402, majf=0, minf=1 00:09:15.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:15.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.530 issued rwts: total=3072,3430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.530 00:09:15.530 Run status group 0 (all jobs): 00:09:15.530 READ: bw=68.1MiB/s (71.4MB/s), 11.9MiB/s-22.7MiB/s (12.5MB/s-23.8MB/s), io=68.7MiB (72.1MB), run=1002-1009msec 00:09:15.530 WRITE: bw=72.7MiB/s (76.3MB/s), 13.3MiB/s-23.9MiB/s (13.9MB/s-25.0MB/s), io=73.4MiB (77.0MB), run=1002-1009msec 00:09:15.530 00:09:15.530 Disk stats (read/write): 00:09:15.530 nvme0n1: ios=4901/5120, merge=0/0, ticks=44361/44592, in_queue=88953, util=86.47% 00:09:15.530 nvme0n2: ios=3504/3584, merge=0/0, ticks=19099/24652, in_queue=43751, util=93.60% 00:09:15.530 nvme0n3: ios=4140/4096, merge=0/0, ticks=27587/24644, in_queue=52231, util=94.15% 
00:09:15.530 nvme0n4: ios=2560/2903, merge=0/0, ticks=36952/67052, in_queue=104004, util=89.58% 00:09:15.530 16:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:15.530 16:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1810442 00:09:15.530 16:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:15.530 16:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:15.530 [global] 00:09:15.530 thread=1 00:09:15.530 invalidate=1 00:09:15.530 rw=read 00:09:15.530 time_based=1 00:09:15.530 runtime=10 00:09:15.530 ioengine=libaio 00:09:15.530 direct=1 00:09:15.530 bs=4096 00:09:15.530 iodepth=1 00:09:15.530 norandommap=1 00:09:15.530 numjobs=1 00:09:15.530 00:09:15.530 [job0] 00:09:15.530 filename=/dev/nvme0n1 00:09:15.530 [job1] 00:09:15.530 filename=/dev/nvme0n2 00:09:15.530 [job2] 00:09:15.530 filename=/dev/nvme0n3 00:09:15.530 [job3] 00:09:15.530 filename=/dev/nvme0n4 00:09:15.530 Could not set queue depth (nvme0n1) 00:09:15.530 Could not set queue depth (nvme0n2) 00:09:15.530 Could not set queue depth (nvme0n3) 00:09:15.530 Could not set queue depth (nvme0n4) 00:09:15.530 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.530 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.530 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.530 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.530 fio-3.35 00:09:15.530 Starting 4 threads 00:09:18.813 16:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:18.813 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=49307648, buflen=4096 00:09:18.813 fio: pid=1810718, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:18.813 16:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:18.813 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=41984000, buflen=4096 00:09:18.813 fio: pid=1810712, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:18.813 16:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.813 16:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:19.072 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=51212288, buflen=4096 00:09:19.072 fio: pid=1810690, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:19.072 16:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.072 16:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:19.072 16:10:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.072 16:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:19.332 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14147584, buflen=4096 00:09:19.332 fio: pid=1810696, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:19.332 00:09:19.332 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1810690: Wed Nov 20 16:10:50 2024 00:09:19.332 read: IOPS=3941, BW=15.4MiB/s (16.1MB/s)(48.8MiB/3172msec) 00:09:19.332 slat (usec): min=6, max=32764, avg=10.67, stdev=300.75 00:09:19.332 clat (usec): min=157, max=41875, avg=240.17, stdev=967.95 00:09:19.332 lat (usec): min=165, max=41897, avg=250.85, stdev=1014.55 00:09:19.332 clat percentiles (usec): 00:09:19.332 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 200], 00:09:19.332 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:09:19.332 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 249], 00:09:19.332 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 652], 99.95th=[40633], 00:09:19.332 | 99.99th=[41157] 00:09:19.332 bw ( KiB/s): min= 7416, max=17920, per=35.29%, avg=15819.00, stdev=4133.54, samples=6 00:09:19.332 iops : min= 1854, max= 4480, avg=3954.67, stdev=1033.34, samples=6 00:09:19.332 lat (usec) : 250=95.83%, 500=4.05%, 750=0.03%, 1000=0.01% 00:09:19.332 lat (msec) : 2=0.01%, 4=0.01%, 50=0.06% 00:09:19.332 cpu : usr=0.73%, sys=3.88%, ctx=12507, majf=0, minf=2 00:09:19.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.332 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.332 issued rwts: total=12504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.332 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1810696: Wed Nov 20 16:10:50 2024 00:09:19.332 read: IOPS=1012, BW=4048KiB/s (4145kB/s)(13.5MiB/3413msec) 00:09:19.332 slat (nsec): min=5455, max=75556, avg=7603.61, stdev=2573.89 00:09:19.332 clat (usec): min=158, max=44982, avg=972.39, stdev=5498.12 00:09:19.332 lat (usec): min=165, max=45005, avg=979.99, stdev=5500.14 00:09:19.332 clat percentiles (usec): 00:09:19.332 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:09:19.332 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 221], 00:09:19.332 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 260], 95.00th=[ 306], 00:09:19.332 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:19.332 | 99.99th=[44827] 00:09:19.332 bw ( KiB/s): min= 96, max=18096, per=10.22%, avg=4580.83, stdev=7397.71, samples=6 00:09:19.332 iops : min= 24, max= 4524, avg=1145.17, stdev=1849.46, samples=6 00:09:19.332 lat (usec) : 250=86.60%, 500=11.43%, 750=0.09% 00:09:19.332 lat (msec) : 50=1.85% 00:09:19.332 cpu : usr=0.18%, sys=1.03%, ctx=3456, majf=0, minf=1 00:09:19.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.332 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:09:19.332 issued rwts: total=3455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.332 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1810712: Wed Nov 20 16:10:50 2024 00:09:19.332 read: IOPS=3463, BW=13.5MiB/s (14.2MB/s)(40.0MiB/2960msec) 00:09:19.332 slat (nsec): min=6095, max=37172, avg=8349.06, stdev=1255.25 00:09:19.332 clat (usec): min=188, max=40993, avg=276.93, stdev=1011.15 00:09:19.332 lat (usec): min=196, max=41017, avg=285.28, stdev=1011.50 00:09:19.332 clat percentiles (usec): 00:09:19.332 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:09:19.332 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:09:19.332 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:09:19.332 | 99.00th=[ 310], 99.50th=[ 371], 99.90th=[ 502], 99.95th=[40633], 00:09:19.332 | 99.99th=[41157] 00:09:19.332 bw ( KiB/s): min= 6968, max=15424, per=30.58%, avg=13708.80, stdev=3768.38, samples=5 00:09:19.332 iops : min= 1742, max= 3856, avg=3427.20, stdev=942.10, samples=5 00:09:19.332 lat (usec) : 250=52.76%, 500=47.11%, 750=0.06% 00:09:19.332 lat (msec) : 50=0.07% 00:09:19.332 cpu : usr=1.01%, sys=3.48%, ctx=10255, majf=0, minf=2 00:09:19.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.332 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.332 issued rwts: total=10251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.332 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1810718: Wed Nov 20 16:10:50 2024 00:09:19.332 read: IOPS=4417, BW=17.3MiB/s (18.1MB/s)(47.0MiB/2725msec) 00:09:19.332 slat (nsec): min=6418, max=31849, avg=7744.99, stdev=938.65 00:09:19.332 clat (usec): min=164, max=2126, avg=215.63, stdev=37.30 00:09:19.332 lat (usec): min=171, max=2140, avg=223.37, stdev=37.37 00:09:19.332 clat percentiles (usec): 00:09:19.332 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 194], 00:09:19.332 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:09:19.332 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 253], 95.00th=[ 269], 00:09:19.332 | 99.00th=[ 289], 99.50th=[ 322], 99.90th=[ 510], 99.95th=[ 553], 00:09:19.332 | 99.99th=[ 1631] 00:09:19.332 bw ( KiB/s): min=16008, max=19112, per=39.59%, avg=17745.60, stdev=1382.21, samples=5 00:09:19.332 iops : min= 4002, max= 4778, avg=4436.40, stdev=345.55, samples=5 00:09:19.332 lat (usec) : 250=88.25%, 500=11.63%, 750=0.08%, 1000=0.02% 00:09:19.332 lat (msec) : 2=0.01%, 4=0.01% 00:09:19.332 cpu : usr=1.32%, sys=4.04%, ctx=12041, majf=0, minf=2 00:09:19.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.332 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.332 issued rwts: total=12039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.332 00:09:19.332 Run status group 0 (all jobs): 00:09:19.332 READ: bw=43.8MiB/s (45.9MB/s), 4048KiB/s-17.3MiB/s (4145kB/s-18.1MB/s), io=149MiB (157MB), run=2725-3413msec 00:09:19.332 00:09:19.332 Disk stats (read/write): 00:09:19.332 
nvme0n1: ios=12310/0, merge=0/0, ticks=2890/0, in_queue=2890, util=94.45% 00:09:19.332 nvme0n2: ios=3453/0, merge=0/0, ticks=3304/0, in_queue=3304, util=96.35% 00:09:19.332 nvme0n3: ios=9966/0, merge=0/0, ticks=3490/0, in_queue=3490, util=99.22% 00:09:19.332 nvme0n4: ios=11604/0, merge=0/0, ticks=3370/0, in_queue=3370, util=99.22% 00:09:19.332 16:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.332 16:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:19.591 16:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.591 16:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:19.850 16:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.850 16:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:20.109 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:20.109 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:20.109 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:20.109 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1810442 00:09:20.109 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:20.109 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.369 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.369 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:20.369 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:20.369 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.369 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:20.369 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.369 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:20.369 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:20.369 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:20.369 nvmf hotplug test: fio failed as expected 00:09:20.369 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.628 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:20.628 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:20.628 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:20.628 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:20.628 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:20.628 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:20.628 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:20.628 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:20.628 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:20.629 rmmod nvme_tcp 00:09:20.629 rmmod nvme_fabrics 00:09:20.629 rmmod nvme_keyring 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1807501 ']' 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1807501 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1807501 ']' 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1807501 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1807501 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1807501' 00:09:20.629 killing process with pid 1807501 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1807501 00:09:20.629 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1807501 00:09:20.888 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:20.888 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:20.888 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:20.888 16:10:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:20.888 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:20.888 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:20.888 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:20.888 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:20.888 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:20.888 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.888 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.888 16:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:23.451 00:09:23.451 real 0m27.671s 00:09:23.451 user 1m49.273s 00:09:23.451 sys 0m8.961s 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.451 ************************************ 00:09:23.451 END TEST nvmf_fio_target 00:09:23.451 ************************************ 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.451 ************************************ 00:09:23.451 START TEST nvmf_bdevio 00:09:23.451 ************************************ 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:23.451 * Looking for test storage... 
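For anyone replaying the nvmf_fio_target teardown recorded above by hand, the trace (fio.sh@65-@91) reduces to a short command sequence. A condensed sketch, with the RPC script path, subsystem NQN and serial taken from this log and error handling omitted:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 1. Drop the malloc bdevs that backed the extra namespaces.
for bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
    "$RPC" bdev_malloc_delete "$bdev"
done

# 2. Disconnect the initiator and wait for the test serial to disappear.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

# 3. Remove the subsystem, the fio state files, and the initiator modules.
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics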
00:09:23.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.451 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:23.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.452 --rc genhtml_branch_coverage=1 00:09:23.452 --rc genhtml_function_coverage=1 00:09:23.452 --rc genhtml_legend=1 00:09:23.452 --rc geninfo_all_blocks=1 00:09:23.452 --rc geninfo_unexecuted_blocks=1 00:09:23.452 00:09:23.452 ' 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:23.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.452 --rc genhtml_branch_coverage=1 00:09:23.452 --rc genhtml_function_coverage=1 00:09:23.452 --rc genhtml_legend=1 00:09:23.452 --rc geninfo_all_blocks=1 00:09:23.452 --rc geninfo_unexecuted_blocks=1 00:09:23.452 00:09:23.452 ' 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:23.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.452 --rc genhtml_branch_coverage=1 00:09:23.452 --rc genhtml_function_coverage=1 00:09:23.452 --rc genhtml_legend=1 00:09:23.452 --rc geninfo_all_blocks=1 00:09:23.452 --rc geninfo_unexecuted_blocks=1 00:09:23.452 00:09:23.452 ' 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:23.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.452 --rc genhtml_branch_coverage=1 00:09:23.452 --rc genhtml_function_coverage=1 00:09:23.452 --rc genhtml_legend=1 00:09:23.452 --rc geninfo_all_blocks=1 00:09:23.452 --rc geninfo_unexecuted_blocks=1 00:09:23.452 00:09:23.452 ' 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:23.452 16:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.026 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.026 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:30.026 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:30.026 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:30.026 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:30.027 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:30.027 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:30.027 16:11:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:30.027 Found net devices under 0000:86:00.0: cvl_0_0 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:30.027 Found net devices under 0000:86:00.1: cvl_0_1 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.027 
16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:30.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:09:30.027 00:09:30.027 --- 10.0.0.2 ping statistics --- 00:09:30.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.027 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:09:30.027 00:09:30.027 --- 10.0.0.1 ping statistics --- 00:09:30.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.027 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:30.027 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1815057 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1815057 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1815057 ']' 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.028 [2024-11-20 16:11:00.419116] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:09:30.028 [2024-11-20 16:11:00.419159] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.028 [2024-11-20 16:11:00.495877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.028 [2024-11-20 16:11:00.537503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.028 [2024-11-20 16:11:00.537540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.028 [2024-11-20 16:11:00.537547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.028 [2024-11-20 16:11:00.537553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.028 [2024-11-20 16:11:00.537558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.028 [2024-11-20 16:11:00.539108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:30.028 [2024-11-20 16:11:00.539234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:30.028 [2024-11-20 16:11:00.539347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.028 [2024-11-20 16:11:00.539348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.028 [2024-11-20 16:11:00.680623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.028 Malloc0 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.028 16:11:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.028 [2024-11-20 16:11:00.740861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:30.028 { 00:09:30.028 "params": { 00:09:30.028 "name": "Nvme$subsystem", 00:09:30.028 "trtype": "$TEST_TRANSPORT", 00:09:30.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.028 "adrfam": "ipv4", 00:09:30.028 "trsvcid": "$NVMF_PORT", 00:09:30.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.028 "hdgst": ${hdgst:-false}, 00:09:30.028 "ddgst": ${ddgst:-false} 00:09:30.028 }, 00:09:30.028 "method": "bdev_nvme_attach_controller" 00:09:30.028 } 00:09:30.028 EOF 00:09:30.028 )") 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:30.028 16:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:30.028 "params": { 00:09:30.028 "name": "Nvme1", 00:09:30.028 "trtype": "tcp", 00:09:30.028 "traddr": "10.0.0.2", 00:09:30.028 "adrfam": "ipv4", 00:09:30.028 "trsvcid": "4420", 00:09:30.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.028 "hdgst": false, 00:09:30.028 "ddgst": false 00:09:30.028 }, 00:09:30.028 "method": "bdev_nvme_attach_controller" 00:09:30.028 }' 00:09:30.028 [2024-11-20 16:11:00.793559] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:09:30.028 [2024-11-20 16:11:00.793609] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815130 ] 00:09:30.028 [2024-11-20 16:11:00.872118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:30.028 [2024-11-20 16:11:00.916158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.028 [2024-11-20 16:11:00.916267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.028 [2024-11-20 16:11:00.916267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.028 I/O targets: 00:09:30.028 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:30.028 00:09:30.028 00:09:30.028 CUnit - A unit testing framework for C - Version 2.1-3 00:09:30.028 http://cunit.sourceforge.net/ 00:09:30.028 00:09:30.028 00:09:30.028 Suite: bdevio tests on: Nvme1n1 00:09:30.287 Test: blockdev write read block ...passed 00:09:30.287 Test: blockdev write zeroes read block ...passed 00:09:30.287 Test: blockdev write zeroes read no split ...passed 00:09:30.287 Test: blockdev write zeroes read split ...passed 00:09:30.287 Test: blockdev write zeroes read split partial ...passed 00:09:30.287 Test: blockdev reset ...[2024-11-20 16:11:01.396540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:30.287 [2024-11-20 16:11:01.396600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183d340 (9): Bad file descriptor 00:09:30.287 [2024-11-20 16:11:01.408875] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:30.287 passed 00:09:30.287 Test: blockdev write read 8 blocks ...passed 00:09:30.287 Test: blockdev write read size > 128k ...passed 00:09:30.287 Test: blockdev write read invalid size ...passed 00:09:30.287 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:30.287 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:30.287 Test: blockdev write read max offset ...passed 00:09:30.546 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:30.546 Test: blockdev writev readv 8 blocks ...passed 00:09:30.546 Test: blockdev writev readv 30 x 1block ...passed 00:09:30.546 Test: blockdev writev readv block ...passed 00:09:30.546 Test: blockdev writev readv size > 128k ...passed 00:09:30.546 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:30.546 Test: blockdev comparev and writev ...[2024-11-20 16:11:01.579092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.546 [2024-11-20 16:11:01.579120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:30.546 [2024-11-20 16:11:01.579134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.546 [2024-11-20 16:11:01.579143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:30.546 [2024-11-20 16:11:01.579381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.546 [2024-11-20 16:11:01.579392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:30.546 [2024-11-20 16:11:01.579404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.546 [2024-11-20 16:11:01.579411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:30.546 [2024-11-20 16:11:01.579656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.546 [2024-11-20 16:11:01.579666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:30.546 [2024-11-20 16:11:01.579677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.546 [2024-11-20 16:11:01.579689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:30.546 [2024-11-20 16:11:01.579920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.546 [2024-11-20 16:11:01.579930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:30.546 [2024-11-20 16:11:01.579941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.546 [2024-11-20 16:11:01.579948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:30.546 passed 00:09:30.546 Test: blockdev nvme passthru rw ...passed 00:09:30.546 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:11:01.661572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.546 [2024-11-20 16:11:01.661587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:30.546 [2024-11-20 16:11:01.661696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.546 [2024-11-20 16:11:01.661707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:30.546 [2024-11-20 16:11:01.661823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.546 [2024-11-20 16:11:01.661833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:30.546 [2024-11-20 16:11:01.661951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.546 [2024-11-20 16:11:01.661961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:30.546 passed 00:09:30.546 Test: blockdev nvme admin passthru ...passed 00:09:30.546 Test: blockdev copy ...passed 00:09:30.546 00:09:30.546 Run Summary: Type Total Ran Passed Failed Inactive 00:09:30.546 suites 1 1 n/a 0 0 00:09:30.546 tests 23 23 23 0 0 00:09:30.546 asserts 152 152 152 0 n/a 00:09:30.546 00:09:30.546 Elapsed time = 0.965 seconds 00:09:30.805 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.806 rmmod nvme_tcp 00:09:30.806 rmmod nvme_fabrics 00:09:30.806 rmmod nvme_keyring 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
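The bdevio run above talks NVMe/TCP across the two-port loopback topology that nvmf_tcp_init set up earlier in this trace: one E810 port stays in the root namespace as the initiator side, the other is moved into a network namespace as the target side, and the two exchange traffic on 10.0.0.0/24. A condensed sketch of that setup, using the interface and namespace names recorded in this log:

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address (namespace)
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open TCP/4420 on the root-namespace interface, tagged so cleanup can find the rule later.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: loopback 4420'

ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator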
00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1815057 ']' 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1815057 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1815057 ']' 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1815057 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815057 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815057' 00:09:30.806 killing process with pid 1815057 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1815057 00:09:30.806 16:11:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1815057 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.065 16:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:33.602 00:09:33.602 real 0m10.105s 00:09:33.602 user 0m10.539s 00:09:33.602 sys 0m5.006s 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.602 ************************************ 00:09:33.602 END TEST nvmf_bdevio 00:09:33.602 ************************************ 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:33.602 00:09:33.602 real 4m38.800s 00:09:33.602 user 10m27.794s 00:09:33.602 sys 1m38.147s 
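Stripped of the helper functions, the target/initiator setup behind the CUnit summary above (23/23 bdevio tests against Nvme1n1) is roughly the sequence below. The RPC names, arguments, addresses and attach-controller parameters are copied from the trace; the JSON wrapper around the bdev_nvme_attach_controller call and the fixed sleep standing in for waitforlisten are assumptions, not part of this log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS="ip netns exec cvl_0_0_ns_spdk"

# Target: nvmf_tgt on cores 3-6 (-m 0x78), one 64 MiB / 512 B malloc bdev
# exported as a namespace of cnode1 over NVMe/TCP at 10.0.0.2:4420.
$NS "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
sleep 2
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator: bdevio consumes a bdev subsystem JSON config instead of RPCs.
"$SPDK/test/bdev/bdevio/bdevio" --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)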
00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.602 ************************************ 00:09:33.602 END TEST nvmf_target_core 00:09:33.602 ************************************ 00:09:33.602 16:11:04 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:33.602 16:11:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:33.602 16:11:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.602 16:11:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.602 ************************************ 00:09:33.602 START TEST nvmf_target_extra 00:09:33.602 ************************************ 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:33.602 * Looking for test storage... 00:09:33.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.602 --rc genhtml_branch_coverage=1 00:09:33.602 --rc genhtml_function_coverage=1 00:09:33.602 --rc genhtml_legend=1 00:09:33.602 --rc geninfo_all_blocks=1 00:09:33.602 --rc geninfo_unexecuted_blocks=1 00:09:33.602 00:09:33.602 ' 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.602 --rc genhtml_branch_coverage=1 00:09:33.602 --rc genhtml_function_coverage=1 00:09:33.602 --rc genhtml_legend=1 00:09:33.602 --rc geninfo_all_blocks=1 00:09:33.602 --rc geninfo_unexecuted_blocks=1 00:09:33.602 00:09:33.602 ' 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.602 --rc genhtml_branch_coverage=1 00:09:33.602 --rc genhtml_function_coverage=1 00:09:33.602 --rc genhtml_legend=1 00:09:33.602 --rc geninfo_all_blocks=1 00:09:33.602 --rc geninfo_unexecuted_blocks=1 00:09:33.602 00:09:33.602 ' 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.602 --rc genhtml_branch_coverage=1 00:09:33.602 --rc genhtml_function_coverage=1 00:09:33.602 --rc genhtml_legend=1 00:09:33.602 --rc geninfo_all_blocks=1 00:09:33.602 --rc geninfo_unexecuted_blocks=1 00:09:33.602 00:09:33.602 ' 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
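The lcov probe traced just above (and earlier, at the start of the bdevio test) is scripts/common.sh's lt/cmp_versions at work: split both version strings on '.', '-' and ':', then compare field by field numerically. A standalone approximation, assuming purely numeric components (the real helper also validates each field through its decimal check):

#!/usr/bin/env bash
version_lt() {                     # "is $1 older than $2?"
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < max; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # strictly smaller field: older
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1    # strictly larger field: not older
    done
    return 1                       # equal versions are not "less than"
}

# Decide, as the traces above do, whether the old '--rc' style lcov options apply.
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi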
00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.602 16:11:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:33.603 ************************************ 00:09:33.603 START TEST nvmf_example 00:09:33.603 ************************************ 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:33.603 * Looking for test storage... 
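About the "line 33: [: : integer expression expected" message above: it is test(1) being handed an empty string where -eq expects an integer; the trace shows the guard expanding to '[' '' -eq 1 ']'. It is not fatal here, the script is seen continuing at nvmf/common.sh@37. A one-line reproduction, with SOME_FLAG standing in for whichever unset variable the real guard reads:

    SOME_FLAG=""                           # hypothetical name; any empty or unset flag behaves the same
    [ "$SOME_FLAG" -eq 1 ] && echo "flag set"
    # prints: bash: [: : integer expression expected   (status 2, the echo never runs)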
00:09:33.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.603 --rc genhtml_branch_coverage=1 00:09:33.603 --rc genhtml_function_coverage=1 00:09:33.603 --rc genhtml_legend=1 00:09:33.603 --rc geninfo_all_blocks=1 00:09:33.603 --rc geninfo_unexecuted_blocks=1 00:09:33.603 00:09:33.603 ' 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.603 --rc genhtml_branch_coverage=1 00:09:33.603 --rc genhtml_function_coverage=1 00:09:33.603 --rc genhtml_legend=1 00:09:33.603 --rc geninfo_all_blocks=1 00:09:33.603 --rc geninfo_unexecuted_blocks=1 00:09:33.603 00:09:33.603 ' 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.603 --rc genhtml_branch_coverage=1 00:09:33.603 --rc genhtml_function_coverage=1 00:09:33.603 --rc genhtml_legend=1 00:09:33.603 --rc geninfo_all_blocks=1 00:09:33.603 --rc geninfo_unexecuted_blocks=1 00:09:33.603 00:09:33.603 ' 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.603 --rc genhtml_branch_coverage=1 00:09:33.603 --rc genhtml_function_coverage=1 00:09:33.603 --rc genhtml_legend=1 00:09:33.603 --rc geninfo_all_blocks=1 00:09:33.603 --rc geninfo_unexecuted_blocks=1 00:09:33.603 00:09:33.603 ' 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:33.603 16:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.603 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:33.604 16:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:33.604 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:40.173 16:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:40.173 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:40.173 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:40.173 Found net devices under 0000:86:00.0: cvl_0_0 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:40.173 Found net devices under 0000:86:00.1: cvl_0_1 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.173 16:11:10 
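gather_supported_nvmf_pci_devs above matches the host's PCI functions against known Intel (e810, x722) and Mellanox device IDs, then resolves each match to its kernel netdev by globbing /sys/bus/pci/devices/$pci/net/, which is how 0000:86:00.0 and 0000:86:00.1 turn into cvl_0_0 and cvl_0_1. A reduced sketch of that sysfs lookup, trimmed to the two functions found on this rig:

    for pci in 0000:86:00.0 0000:86:00.1; do           # the two E810 functions reported above
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue               # no bound netdev, skip this function
            echo "Found net device under $pci: ${netdir##*/}"
        done
    done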
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.173 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:40.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:40.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:09:40.174 00:09:40.174 --- 10.0.0.2 ping statistics --- 00:09:40.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.174 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:09:40.174 00:09:40.174 --- 10.0.0.1 ping statistics --- 00:09:40.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.174 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1819065 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1819065 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1819065 ']' 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.174 16:11:10 
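nvmf_tcp_init above splits the two E810 ports across a network namespace so the target and initiator sides each get their own interface: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are ping-checked before the example target is launched. Condensed from the commands in the trace, using the interface and namespace names of this host:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace to the target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace back to the initiator address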
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.174 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:40.739 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:52.928 Initializing NVMe Controllers 00:09:52.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:52.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:52.928 Initialization complete. Launching workers. 00:09:52.928 ======================================================== 00:09:52.928 Latency(us) 00:09:52.928 Device Information : IOPS MiB/s Average min max 00:09:52.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18201.00 71.10 3515.78 614.45 15905.43 00:09:52.928 ======================================================== 00:09:52.928 Total : 18201.00 71.10 3515.78 614.45 15905.43 00:09:52.928 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.928 rmmod nvme_tcp 00:09:52.928 rmmod nvme_fabrics 00:09:52.928 rmmod nvme_keyring 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1819065 ']' 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1819065 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1819065 ']' 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1819065 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1819065 00:09:52.928 16:11:22 
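The rpc_cmd calls above are the whole example-target bring-up: a TCP transport, one 64 MB malloc bdev with 512-byte blocks (Malloc0), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener at 10.0.0.2:4420, after which spdk_nvme_perf drives 4 KiB random mixed I/O at it for 10 seconds. rpc_cmd in the harness forwards to scripts/rpc.py, so the same sequence done by hand would look roughly like this (paths relative to the spdk checkout):

    ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &   # target app, as launched above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512                                   # reports the new bdev name, Malloc0 here
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'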
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1819065' 00:09:52.928 killing process with pid 1819065 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1819065 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1819065 00:09:52.928 nvmf threads initialize successfully 00:09:52.928 bdev subsystem init successfully 00:09:52.928 created a nvmf target service 00:09:52.928 create targets's poll groups done 00:09:52.928 all subsystems of target started 00:09:52.928 nvmf target is running 00:09:52.928 all subsystems of target stopped 00:09:52.928 destroy targets's poll groups done 00:09:52.928 destroyed the nvmf target service 00:09:52.928 bdev subsystem finish successfully 00:09:52.928 nvmf threads destroy successfully 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.928 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.187 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.187 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:53.187 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.187 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.446 00:09:53.446 real 0m19.849s 00:09:53.446 user 0m46.028s 00:09:53.446 sys 0m6.078s 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.446 ************************************ 00:09:53.446 END TEST nvmf_example 00:09:53.446 ************************************ 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:53.446 ************************************ 00:09:53.446 START TEST nvmf_filesystem 00:09:53.446 ************************************ 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:53.446 * Looking for test storage... 00:09:53.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.446 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.447 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.709 --rc genhtml_branch_coverage=1 00:09:53.709 --rc genhtml_function_coverage=1 00:09:53.709 --rc genhtml_legend=1 00:09:53.709 --rc geninfo_all_blocks=1 00:09:53.709 --rc geninfo_unexecuted_blocks=1 00:09:53.709 00:09:53.709 ' 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.709 --rc genhtml_branch_coverage=1 00:09:53.709 --rc genhtml_function_coverage=1 00:09:53.709 --rc genhtml_legend=1 00:09:53.709 --rc geninfo_all_blocks=1 00:09:53.709 --rc geninfo_unexecuted_blocks=1 00:09:53.709 00:09:53.709 ' 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.709 --rc genhtml_branch_coverage=1 00:09:53.709 --rc genhtml_function_coverage=1 00:09:53.709 --rc genhtml_legend=1 00:09:53.709 --rc geninfo_all_blocks=1 00:09:53.709 --rc geninfo_unexecuted_blocks=1 00:09:53.709 00:09:53.709 ' 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.709 --rc genhtml_branch_coverage=1 00:09:53.709 --rc genhtml_function_coverage=1 00:09:53.709 --rc genhtml_legend=1 00:09:53.709 --rc geninfo_all_blocks=1 00:09:53.709 --rc geninfo_unexecuted_blocks=1 00:09:53.709 00:09:53.709 ' 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:53.709 16:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:53.709 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:53.710 
16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:53.710 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:53.711 #define SPDK_CONFIG_H 00:09:53.711 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:53.711 #define SPDK_CONFIG_APPS 1 00:09:53.711 #define SPDK_CONFIG_ARCH native 00:09:53.711 #undef SPDK_CONFIG_ASAN 00:09:53.711 #undef SPDK_CONFIG_AVAHI 00:09:53.711 #undef SPDK_CONFIG_CET 00:09:53.711 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:53.711 #define SPDK_CONFIG_COVERAGE 1 00:09:53.711 #define SPDK_CONFIG_CROSS_PREFIX 00:09:53.711 #undef SPDK_CONFIG_CRYPTO 00:09:53.711 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:53.711 #undef SPDK_CONFIG_CUSTOMOCF 00:09:53.711 #undef SPDK_CONFIG_DAOS 00:09:53.711 #define SPDK_CONFIG_DAOS_DIR 00:09:53.711 #define SPDK_CONFIG_DEBUG 1 00:09:53.711 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:53.711 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:53.711 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:53.711 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:53.711 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:53.711 #undef SPDK_CONFIG_DPDK_UADK 00:09:53.711 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:53.711 #define SPDK_CONFIG_EXAMPLES 1 00:09:53.711 #undef SPDK_CONFIG_FC 00:09:53.711 #define SPDK_CONFIG_FC_PATH 00:09:53.711 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:53.711 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:53.711 #define SPDK_CONFIG_FSDEV 1 00:09:53.711 #undef SPDK_CONFIG_FUSE 00:09:53.711 #undef SPDK_CONFIG_FUZZER 00:09:53.711 #define SPDK_CONFIG_FUZZER_LIB 00:09:53.711 #undef SPDK_CONFIG_GOLANG 00:09:53.711 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:53.711 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:53.711 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:53.711 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:53.711 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:53.711 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:53.711 #undef SPDK_CONFIG_HAVE_LZ4 00:09:53.711 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:53.711 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:53.711 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:53.711 #define SPDK_CONFIG_IDXD 1 00:09:53.711 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:53.711 #undef SPDK_CONFIG_IPSEC_MB 00:09:53.711 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:53.711 #define SPDK_CONFIG_ISAL 1 00:09:53.711 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:53.711 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:53.711 #define SPDK_CONFIG_LIBDIR 00:09:53.711 #undef SPDK_CONFIG_LTO 00:09:53.711 #define SPDK_CONFIG_MAX_LCORES 128 00:09:53.711 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:53.711 #define SPDK_CONFIG_NVME_CUSE 1 00:09:53.711 #undef SPDK_CONFIG_OCF 00:09:53.711 #define SPDK_CONFIG_OCF_PATH 00:09:53.711 #define SPDK_CONFIG_OPENSSL_PATH 00:09:53.711 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:53.711 #define SPDK_CONFIG_PGO_DIR 00:09:53.711 #undef SPDK_CONFIG_PGO_USE 00:09:53.711 #define SPDK_CONFIG_PREFIX /usr/local 00:09:53.711 #undef SPDK_CONFIG_RAID5F 00:09:53.711 #undef SPDK_CONFIG_RBD 00:09:53.711 #define SPDK_CONFIG_RDMA 1 00:09:53.711 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:53.711 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:53.711 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:53.711 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:53.711 #define SPDK_CONFIG_SHARED 1 00:09:53.711 #undef SPDK_CONFIG_SMA 00:09:53.711 #define SPDK_CONFIG_TESTS 1 00:09:53.711 #undef SPDK_CONFIG_TSAN 
00:09:53.711 #define SPDK_CONFIG_UBLK 1 00:09:53.711 #define SPDK_CONFIG_UBSAN 1 00:09:53.711 #undef SPDK_CONFIG_UNIT_TESTS 00:09:53.711 #undef SPDK_CONFIG_URING 00:09:53.711 #define SPDK_CONFIG_URING_PATH 00:09:53.711 #undef SPDK_CONFIG_URING_ZNS 00:09:53.711 #undef SPDK_CONFIG_USDT 00:09:53.711 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:53.711 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:53.711 #define SPDK_CONFIG_VFIO_USER 1 00:09:53.711 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:53.711 #define SPDK_CONFIG_VHOST 1 00:09:53.711 #define SPDK_CONFIG_VIRTIO 1 00:09:53.711 #undef SPDK_CONFIG_VTUNE 00:09:53.711 #define SPDK_CONFIG_VTUNE_DIR 00:09:53.711 #define SPDK_CONFIG_WERROR 1 00:09:53.711 #define SPDK_CONFIG_WPDK_DIR 00:09:53.711 #undef SPDK_CONFIG_XNVME 00:09:53.711 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:53.711 16:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:53.711 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:53.712 16:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:53.712 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
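The LD_LIBRARY_PATH and PYTHONPATH exports above (like the PATH built up earlier in paths/export.sh) visibly repeat the same directories several times, because the same export lines execute once per sourcing of the script. Duplicate entries in these colon-separated lists are functionally harmless, costing only redundant lookups. A hypothetical dedupe helper, not part of autotest_common.sh and shown only to illustrate handling of such colon-separated lists:

# Hypothetical helper (assumption, not in the SPDK scripts): keep only the
# first occurrence of each entry in a colon-separated path list.
dedupe_path() {
    local IFS=':' entry seen='' out=''
    for entry in $1; do
        case ":$seen:" in
            *":$entry:"*) ;;                                  # already kept, skip
            *) seen="$seen:$entry"; out="${out:+$out:}$entry" ;;
        esac
    done
    printf '%s\n' "$out"
}
# e.g.: export LD_LIBRARY_PATH=$(dedupe_path "$LD_LIBRARY_PATH")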
00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
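The lines just above assemble a LeakSanitizer suppression list: any stale /var/tmp/asan_suppression_file is removed, a leak:libfuse3.so entry is emitted for it, and LSAN_OPTIONS is pointed at the file so known libfuse3 leaks do not fail the run. A condensed restatement of that pattern (a sketch; the exact redirections in autotest_common.sh are not visible in the trace and may differ):

# Sketch of the LeakSanitizer suppression setup seen in the trace above.
supp=/var/tmp/asan_suppression_file
rm -f "$supp"
# Each "leak:<pattern>" line tells LeakSanitizer to ignore leak reports whose
# stack trace matches the pattern; libfuse3 is a known offender at exit.
echo 'leak:libfuse3.so' >> "$supp"
export LSAN_OPTIONS="suppressions=$supp"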
00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:53.713 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1821336 ]] 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1821336 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
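The set_test_storage 2147483648 call above requests roughly 2 GiB of scratch space for the filesystem tests; the trace that follows reads the df -T mount table, skips the small devtmpfs/pmem/tmpfs mounts, settles on the overlay root with about 189 GB available, and prints "* Found test storage at ...". A minimal sketch of that selection idea, using a hypothetical pick_test_storage helper and GNU df --output in place of the awk parsing the real script performs:

# Minimal sketch (assumed helper name): return the first candidate directory
# whose filesystem has at least the requested number of bytes available.
pick_test_storage() {
    local requested_size=$1; shift
    local dir avail
    for dir in "$@"; do
        avail=$(df -B1 --output=avail -- "$dir" 2>/dev/null | tail -n1)
        [[ $avail =~ ^[0-9]+$ ]] || continue      # skip unreadable mounts
        if (( avail >= requested_size )); then
            printf '* Found test storage at %s\n' "$dir" >&2
            printf '%s\n' "$dir"
            return 0
        fi
    done
    return 1
}
# e.g.: storage=$(pick_test_storage $((2 * 1024 * 1024 * 1024)) "$testdir" /tmp)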
00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.2mI54h 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.2mI54h/tests/target /tmp/spdk.2mI54h 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:53.714 16:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189140852736 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963973632 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6823120896 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97970618368 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981222912 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981988864 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=765952 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:53.714 16:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:53.714 * Looking for test storage... 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189140852736 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9037713408 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.714 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:53.715 16:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.715 --rc genhtml_branch_coverage=1 00:09:53.715 --rc genhtml_function_coverage=1 00:09:53.715 --rc genhtml_legend=1 00:09:53.715 --rc geninfo_all_blocks=1 00:09:53.715 --rc geninfo_unexecuted_blocks=1 00:09:53.715 00:09:53.715 ' 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.715 --rc genhtml_branch_coverage=1 00:09:53.715 --rc genhtml_function_coverage=1 00:09:53.715 --rc genhtml_legend=1 00:09:53.715 --rc geninfo_all_blocks=1 00:09:53.715 --rc geninfo_unexecuted_blocks=1 00:09:53.715 00:09:53.715 ' 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.715 --rc genhtml_branch_coverage=1 00:09:53.715 --rc genhtml_function_coverage=1 00:09:53.715 --rc genhtml_legend=1 00:09:53.715 --rc geninfo_all_blocks=1 00:09:53.715 --rc geninfo_unexecuted_blocks=1 00:09:53.715 00:09:53.715 ' 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.715 --rc genhtml_branch_coverage=1 00:09:53.715 --rc genhtml_function_coverage=1 00:09:53.715 --rc genhtml_legend=1 00:09:53.715 --rc geninfo_all_blocks=1 00:09:53.715 --rc geninfo_unexecuted_blocks=1 00:09:53.715 00:09:53.715 ' 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
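The lcov probe above leans on a small pure-bash version comparator (lt / cmp_versions in scripts/common.sh) that splits each version string on ".", "-" and ":" and compares field by field. The following is an illustrative re-implementation of that idea, not the script's exact code:

    # lt A B -> success (0) if version A is strictly lower than version B.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    # Same shape as the check above: pick branch/function coverage flags for old lcov.
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2.x"
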
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.715 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.975 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:00.651 
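The "[: : integer expression expected" message above is benign: line 33 of nvmf/common.sh runs a numeric test against a variable that happens to be empty in this run ('[' '' -eq 1 ']'). A generic guard for that pattern looks like the sketch below; SOME_FLAG is a placeholder, not the actual variable name used by the script:

    # Defaulting the variable avoids the "integer expression expected" noise when
    # it is unset or empty; SOME_FLAG stands in for whichever flag is being tested.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
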
16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.651 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:00.652 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:00.652 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:00.652 Found net devices under 0000:86:00.0: cvl_0_0 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:00.652 Found net devices under 
0000:86:00.1: cvl_0_1 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:00.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:10:00.652 00:10:00.652 --- 10.0.0.2 ping statistics --- 00:10:00.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.652 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:10:00.652 00:10:00.652 --- 10.0.0.1 ping statistics --- 00:10:00.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.652 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.652 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.652 ************************************ 00:10:00.652 START TEST nvmf_filesystem_no_in_capsule 00:10:00.652 ************************************ 00:10:00.652 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:00.652 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:00.652 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:00.652 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.652 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.652 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
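The nvmf_tcp_init commands above amount to a two-endpoint loopback topology: one E810 port is moved into its own network namespace to act as the target, both ends get 10.0.0.x addresses, TCP port 4420 is opened, and reachability is checked in both directions. Condensed from the trace:

    TARGET_IF=cvl_0_0
    INITIATOR_IF=cvl_0_1
    TARGET_NS=cvl_0_0_ns_spdk

    ip netns add "$TARGET_NS"
    ip link set "$TARGET_IF" netns "$TARGET_NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target -> initiator
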
00:10:00.652 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1824570 00:10:00.652 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1824570 00:10:00.652 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.652 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1824570 ']' 00:10:00.652 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.653 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.653 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.653 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.653 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.653 [2024-11-20 16:11:31.090965] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:10:00.653 [2024-11-20 16:11:31.091029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.653 [2024-11-20 16:11:31.169426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.653 [2024-11-20 16:11:31.210690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.653 [2024-11-20 16:11:31.210728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.653 [2024-11-20 16:11:31.210735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.653 [2024-11-20 16:11:31.210741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.653 [2024-11-20 16:11:31.210746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
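nvmfappstart, traced above, launches nvmf_tgt inside the target namespace and then waits until its RPC socket answers. A minimal equivalent is sketched below; the wait loop illustrates what waitforlisten checks rather than reproducing its implementation:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the default RPC socket until the app responds, bailing out if it died.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
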
00:10:00.653 [2024-11-20 16:11:31.212142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.653 [2024-11-20 16:11:31.212256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.653 [2024-11-20 16:11:31.212319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.653 [2024-11-20 16:11:31.212321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.913 [2024-11-20 16:11:31.975922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.913 16:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.913 Malloc1 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.913 16:11:32 
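rpc_cmd in the trace forwards its arguments to scripts/rpc.py, so the target configuration shown here, together with the nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls that follow, is roughly equivalent to invoking rpc.py directly:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192 -c 0                   # -c 0: in_capsule=0 for this subtest
    rpc bdev_malloc_create 512 512 -b Malloc1                          # 512 MiB malloc bdev, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
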
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.913 [2024-11-20 16:11:32.138423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.913 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:01.170 { 00:10:01.170 "name": "Malloc1", 00:10:01.170 "aliases": [ 00:10:01.170 "4e284230-382f-40f7-bd01-38b6c89ad19d" 00:10:01.170 ], 00:10:01.170 "product_name": "Malloc disk", 00:10:01.170 "block_size": 512, 00:10:01.170 "num_blocks": 1048576, 00:10:01.170 "uuid": "4e284230-382f-40f7-bd01-38b6c89ad19d", 00:10:01.170 "assigned_rate_limits": { 00:10:01.170 "rw_ios_per_sec": 0, 00:10:01.170 "rw_mbytes_per_sec": 0, 00:10:01.170 "r_mbytes_per_sec": 0, 00:10:01.170 "w_mbytes_per_sec": 0 00:10:01.170 }, 00:10:01.170 "claimed": true, 00:10:01.170 "claim_type": "exclusive_write", 00:10:01.170 "zoned": false, 00:10:01.170 "supported_io_types": { 00:10:01.170 "read": 
true, 00:10:01.170 "write": true, 00:10:01.170 "unmap": true, 00:10:01.170 "flush": true, 00:10:01.170 "reset": true, 00:10:01.170 "nvme_admin": false, 00:10:01.170 "nvme_io": false, 00:10:01.170 "nvme_io_md": false, 00:10:01.170 "write_zeroes": true, 00:10:01.170 "zcopy": true, 00:10:01.170 "get_zone_info": false, 00:10:01.170 "zone_management": false, 00:10:01.170 "zone_append": false, 00:10:01.170 "compare": false, 00:10:01.170 "compare_and_write": false, 00:10:01.170 "abort": true, 00:10:01.170 "seek_hole": false, 00:10:01.170 "seek_data": false, 00:10:01.170 "copy": true, 00:10:01.170 "nvme_iov_md": false 00:10:01.170 }, 00:10:01.170 "memory_domains": [ 00:10:01.170 { 00:10:01.170 "dma_device_id": "system", 00:10:01.170 "dma_device_type": 1 00:10:01.170 }, 00:10:01.170 { 00:10:01.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.170 "dma_device_type": 2 00:10:01.170 } 00:10:01.170 ], 00:10:01.170 "driver_specific": {} 00:10:01.170 } 00:10:01.170 ]' 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:01.170 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:01.171 16:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:02.545 16:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:02.545 16:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:02.545 16:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.545 16:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:02.545 16:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:04.443 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:04.444 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:04.444 16:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:05.009 16:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:05.941 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:05.941 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:05.941 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:05.941 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.941 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.199 ************************************ 00:10:06.199 START TEST filesystem_ext4 00:10:06.199 ************************************ 00:10:06.199 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
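On the initiator side, the trace above connects to the subsystem, waits for the namespace to show up by serial number, sanity-checks its size against the 512 MiB malloc bdev, and lays down a single GPT partition. Condensed below; the serial wait loop is an illustration of waitforserial, and the sysfs size read is one way to arrive at the 536870912 bytes reported:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --hostid=00ad29c2-ccbd-e911-906e-0017a4403562

    # Wait for a block device whose SERIAL matches the subsystem serial, then grab its name.
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

    # /sys/block/<dev>/size is in 512-byte sectors; 1048576 * 512 = 536870912 bytes here.
    nvme_size=$(( $(cat "/sys/block/$nvme_name/size") * 512 ))

    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
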
00:10:06.199 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:06.199 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:06.199 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:06.199 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:06.199 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:06.199 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:06.199 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:06.200 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:06.200 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:06.200 16:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:06.200 mke2fs 1.47.0 (5-Feb-2023) 00:10:06.200 Discarding device blocks: 0/522240 done 00:10:06.200 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:06.200 Filesystem UUID: 04e52958-ff0d-436c-985e-6b2912637ae2 00:10:06.200 Superblock backups stored on blocks: 00:10:06.200 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:06.200 00:10:06.200 Allocating group tables: 0/64 done 00:10:06.200 Writing inode tables: 0/64 done 00:10:06.458 Creating journal (8192 blocks): done 00:10:08.670 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:10:08.670 00:10:08.670 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:08.670 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:15.253 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:15.253 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:15.253 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:15.253 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:15.253 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:15.254 
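Each filesystem_<fstype> subtest in this section repeats the same smoke test: make the filesystem on the partition, mount it, create and delete a file with syncs in between, unmount, then confirm the nvmf_tgt process and the block devices are still present. In outline, with the pid and device names from this run:

    fstype=ext4                                   # the btrfs and xfs runs below follow the same steps
    mkfs."$fstype" -F /dev/nvme0n1p1              # ext4 forces with -F; btrfs and xfs use -f

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 1824570                               # nvmf_tgt from this run is still alive
    lsblk -l -o NAME | grep -q -w nvme0n1         # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1       # partition still visible
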
16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1824570 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:15.254 00:10:15.254 real 0m8.489s 00:10:15.254 user 0m0.035s 00:10:15.254 sys 0m0.070s 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:15.254 ************************************ 00:10:15.254 END TEST filesystem_ext4 00:10:15.254 ************************************ 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.254 ************************************ 00:10:15.254 START TEST filesystem_btrfs 00:10:15.254 ************************************ 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:15.254 16:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:15.254 btrfs-progs v6.8.1 00:10:15.254 See https://btrfs.readthedocs.io for more information. 00:10:15.254 00:10:15.254 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:15.254 NOTE: several default settings have changed in version 5.15, please make sure 00:10:15.254 this does not affect your deployments: 00:10:15.254 - DUP for metadata (-m dup) 00:10:15.254 - enabled no-holes (-O no-holes) 00:10:15.254 - enabled free-space-tree (-R free-space-tree) 00:10:15.254 00:10:15.254 Label: (null) 00:10:15.254 UUID: a102ab57-36af-4b7a-aaf3-ece037bef68b 00:10:15.254 Node size: 16384 00:10:15.254 Sector size: 4096 (CPU page size: 4096) 00:10:15.254 Filesystem size: 510.00MiB 00:10:15.254 Block group profiles: 00:10:15.254 Data: single 8.00MiB 00:10:15.254 Metadata: DUP 32.00MiB 00:10:15.254 System: DUP 8.00MiB 00:10:15.254 SSD detected: yes 00:10:15.254 Zoned device: no 00:10:15.254 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:15.254 Checksum: crc32c 00:10:15.254 Number of devices: 1 00:10:15.254 Devices: 00:10:15.254 ID SIZE PATH 00:10:15.254 1 510.00MiB /dev/nvme0n1p1 00:10:15.254 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:15.254 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1824570 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:15.254 
16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:15.254 00:10:15.254 real 0m0.575s 00:10:15.254 user 0m0.026s 00:10:15.254 sys 0m0.114s 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:15.254 ************************************ 00:10:15.254 END TEST filesystem_btrfs 00:10:15.254 ************************************ 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.254 ************************************ 00:10:15.254 START TEST filesystem_xfs 00:10:15.254 ************************************ 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:15.254 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:15.254 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:15.254 = sectsz=512 attr=2, projid32bit=1 00:10:15.254 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:15.254 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:15.254 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:15.254 = sunit=0 swidth=0 blks 00:10:15.254 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:15.254 log =internal log bsize=4096 blocks=16384, version=2 00:10:15.254 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:15.254 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:16.185 Discarding blocks...Done. 00:10:16.185 16:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:16.185 16:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:18.083 16:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1824570 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:18.083 00:10:18.083 real 0m2.776s 00:10:18.083 user 0m0.024s 00:10:18.083 sys 0m0.076s 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:18.083 ************************************ 00:10:18.083 END TEST filesystem_xfs 00:10:18.083 ************************************ 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.083 16:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:18.083 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1824570 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1824570 ']' 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1824570 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1824570 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1824570' 00:10:18.340 killing process with pid 1824570 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1824570 00:10:18.340 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1824570 00:10:18.597 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:18.598 00:10:18.598 real 0m18.672s 00:10:18.598 user 1m13.668s 00:10:18.598 sys 0m1.470s 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.598 ************************************ 00:10:18.598 END TEST nvmf_filesystem_no_in_capsule 00:10:18.598 ************************************ 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.598 ************************************ 00:10:18.598 START TEST nvmf_filesystem_in_capsule 00:10:18.598 ************************************ 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1827798 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1827798 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1827798 ']' 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
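The bring-up recorded around this point reduces to three steps: launch nvmf_tgt inside the test network namespace, wait for its RPC socket at /var/tmp/spdk.sock to answer, then configure the TCP transport and the nqn.2016-06.io.spdk:cnode1 subsystem over RPC (the rpc_cmd calls that follow in the trace). A rough bash sketch, assuming the standalone scripts/rpc.py client instead of the harness's rpc_cmd wrapper; the netns name, core mask and 4096-byte in-capsule size are the ones shown in the log:

  # launch the target in the test namespace and remember its PID
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # waitforlisten equivalent: poll until the RPC socket responds
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1      # give up if the target died
      sleep 0.5
  done

  # configuration issued next in the trace: TCP transport with in-capsule data,
  # a 512 MiB malloc bdev, and a subsystem listening on 10.0.0.2:4420
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420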
00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.598 16:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.856 [2024-11-20 16:11:49.833058] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:10:18.856 [2024-11-20 16:11:49.833108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.856 [2024-11-20 16:11:49.913942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.856 [2024-11-20 16:11:49.954060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.856 [2024-11-20 16:11:49.954098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.856 [2024-11-20 16:11:49.954105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.856 [2024-11-20 16:11:49.954111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.856 [2024-11-20 16:11:49.954116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.856 [2024-11-20 16:11:49.955678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.856 [2024-11-20 16:11:49.955789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.856 [2024-11-20 16:11:49.955896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.856 [2024-11-20 16:11:49.955897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.856 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.856 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:18.856 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.856 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.856 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 [2024-11-20 16:11:50.103452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.115 16:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 Malloc1 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 [2024-11-20 16:11:50.257382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:19.115 16:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:19.115 { 00:10:19.115 "name": "Malloc1", 00:10:19.115 "aliases": [ 00:10:19.115 "41a2fdda-98c9-4b75-9fc6-d6a0549bd23a" 00:10:19.115 ], 00:10:19.115 "product_name": "Malloc disk", 00:10:19.115 "block_size": 512, 00:10:19.115 "num_blocks": 1048576, 00:10:19.115 "uuid": "41a2fdda-98c9-4b75-9fc6-d6a0549bd23a", 00:10:19.115 "assigned_rate_limits": { 00:10:19.115 "rw_ios_per_sec": 0, 00:10:19.115 "rw_mbytes_per_sec": 0, 00:10:19.115 "r_mbytes_per_sec": 0, 00:10:19.115 "w_mbytes_per_sec": 0 00:10:19.115 }, 00:10:19.115 "claimed": true, 00:10:19.115 "claim_type": "exclusive_write", 00:10:19.115 "zoned": false, 00:10:19.115 "supported_io_types": { 00:10:19.115 "read": true, 00:10:19.115 "write": true, 00:10:19.115 "unmap": true, 00:10:19.115 "flush": true, 00:10:19.115 "reset": true, 00:10:19.115 "nvme_admin": false, 00:10:19.115 "nvme_io": false, 00:10:19.115 "nvme_io_md": false, 00:10:19.115 "write_zeroes": true, 00:10:19.115 "zcopy": true, 00:10:19.115 "get_zone_info": false, 00:10:19.115 "zone_management": false, 00:10:19.115 "zone_append": false, 00:10:19.115 "compare": false, 00:10:19.115 "compare_and_write": false, 00:10:19.115 "abort": true, 00:10:19.115 "seek_hole": false, 00:10:19.115 "seek_data": false, 00:10:19.115 "copy": true, 00:10:19.115 "nvme_iov_md": false 00:10:19.115 }, 00:10:19.115 "memory_domains": [ 00:10:19.115 { 00:10:19.115 "dma_device_id": "system", 00:10:19.115 "dma_device_type": 1 00:10:19.115 }, 00:10:19.115 { 00:10:19.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.115 "dma_device_type": 2 00:10:19.115 } 00:10:19.115 ], 00:10:19.115 "driver_specific": {} 00:10:19.115 } 00:10:19.115 ]' 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:19.115 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:19.373 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:19.373 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:19.373 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:19.373 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:19.373 16:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:20.306 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.306 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:20.306 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.306 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:20.306 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:22.830 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:22.830 16:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:23.087 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.020 ************************************ 00:10:24.020 START TEST filesystem_in_capsule_ext4 00:10:24.020 ************************************ 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:24.020 16:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:24.020 mke2fs 1.47.0 (5-Feb-2023) 00:10:24.278 Discarding device blocks: 0/522240 done 00:10:24.278 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:24.278 Filesystem UUID: 98c09349-9a22-406b-8d5f-e109bc9497e2 00:10:24.278 Superblock backups stored on blocks: 00:10:24.278 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:24.278 00:10:24.278 Allocating group tables: 0/64 done 00:10:24.278 Writing inode tables: 
0/64 done 00:10:24.278 Creating journal (8192 blocks): done 00:10:25.910 Writing superblocks and filesystem accounting information: 0/64 done 00:10:25.910 00:10:25.910 16:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:25.910 16:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1827798 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:32.465 00:10:32.465 real 0m7.967s 00:10:32.465 user 0m0.026s 00:10:32.465 sys 0m0.077s 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.465 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:32.465 ************************************ 00:10:32.465 END TEST filesystem_in_capsule_ext4 00:10:32.466 ************************************ 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.466 
************************************ 00:10:32.466 START TEST filesystem_in_capsule_btrfs 00:10:32.466 ************************************ 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:32.466 btrfs-progs v6.8.1 00:10:32.466 See https://btrfs.readthedocs.io for more information. 00:10:32.466 00:10:32.466 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:32.466 NOTE: several default settings have changed in version 5.15, please make sure 00:10:32.466 this does not affect your deployments: 00:10:32.466 - DUP for metadata (-m dup) 00:10:32.466 - enabled no-holes (-O no-holes) 00:10:32.466 - enabled free-space-tree (-R free-space-tree) 00:10:32.466 00:10:32.466 Label: (null) 00:10:32.466 UUID: c68911a8-45c3-4139-aead-283a8ba83845 00:10:32.466 Node size: 16384 00:10:32.466 Sector size: 4096 (CPU page size: 4096) 00:10:32.466 Filesystem size: 510.00MiB 00:10:32.466 Block group profiles: 00:10:32.466 Data: single 8.00MiB 00:10:32.466 Metadata: DUP 32.00MiB 00:10:32.466 System: DUP 8.00MiB 00:10:32.466 SSD detected: yes 00:10:32.466 Zoned device: no 00:10:32.466 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:32.466 Checksum: crc32c 00:10:32.466 Number of devices: 1 00:10:32.466 Devices: 00:10:32.466 ID SIZE PATH 00:10:32.466 1 510.00MiB /dev/nvme0n1p1 00:10:32.466 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:32.466 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1827798 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:32.724 00:10:32.724 real 0m0.698s 00:10:32.724 user 0m0.030s 00:10:32.724 sys 0m0.115s 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:32.724 ************************************ 00:10:32.724 END TEST filesystem_in_capsule_btrfs 00:10:32.724 ************************************ 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.724 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.981 ************************************ 00:10:32.981 START TEST filesystem_in_capsule_xfs 00:10:32.981 ************************************ 00:10:32.981 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:32.981 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:32.981 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:32.981 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:32.981 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:32.981 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:32.981 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:32.982 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:32.982 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:32.982 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:32.982 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:32.982 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:32.982 = sectsz=512 attr=2, projid32bit=1 00:10:32.982 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:32.982 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:32.982 data = bsize=4096 blocks=130560, imaxpct=25 00:10:32.982 = sunit=0 swidth=0 blks 00:10:32.982 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:32.982 log =internal log bsize=4096 blocks=16384, version=2 00:10:32.982 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:32.982 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:33.915 Discarding blocks...Done. 
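Each filesystem_in_capsule_* subtest in this block exercises the same nvmf_filesystem_create sequence that the filesystem.sh line references (@18 through @43) spell out: format the exported partition, mount it, do a small create/delete cycle, unmount, then verify that the target process and the block device are both still present. A rough reconstruction from the commands visible in the trace, with xfs as the example; the device name, mount point and PID are the ones logged above, and make_filesystem's retry loop is omitted:

  fstype=xfs
  nvme_name=nvme0n1
  pid=1827798                         # nvmf_tgt PID from the trace

  force=-f; [ "$fstype" = ext4 ] && force=-F
  mkfs.$fstype $force /dev/${nvme_name}p1

  mount /dev/${nvme_name}p1 /mnt/device
  touch /mnt/device/aaa               # small write, then remove it again
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device

  kill -0 "$pid"                                  # target must still be running
  lsblk -l -o NAME | grep -q -w "$nvme_name"      # namespace still visible
  lsblk -l -o NAME | grep -q -w "${nvme_name}p1"  # partition still visible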
00:10:33.915 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:33.915 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1827798 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:36.441 00:10:36.441 real 0m3.247s 00:10:36.441 user 0m0.019s 00:10:36.441 sys 0m0.080s 00:10:36.441 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:36.442 ************************************ 00:10:36.442 END TEST filesystem_in_capsule_xfs 00:10:36.442 ************************************ 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.442 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1827798 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1827798 ']' 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1827798 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1827798 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1827798' 00:10:36.699 killing process with pid 1827798 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1827798 00:10:36.699 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1827798 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:36.959 00:10:36.959 real 0m18.278s 00:10:36.959 user 1m11.974s 00:10:36.959 sys 0m1.416s 00:10:36.959 16:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.959 ************************************ 00:10:36.959 END TEST nvmf_filesystem_in_capsule 00:10:36.959 ************************************ 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.959 rmmod nvme_tcp 00:10:36.959 rmmod nvme_fabrics 00:10:36.959 rmmod nvme_keyring 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.959 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:39.497 00:10:39.497 real 0m45.735s 00:10:39.497 user 2m27.694s 00:10:39.497 sys 0m7.642s 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.497 
************************************ 00:10:39.497 END TEST nvmf_filesystem 00:10:39.497 ************************************ 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:39.497 ************************************ 00:10:39.497 START TEST nvmf_target_discovery 00:10:39.497 ************************************ 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:39.497 * Looking for test storage... 00:10:39.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.497 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:39.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.498 --rc genhtml_branch_coverage=1 00:10:39.498 --rc genhtml_function_coverage=1 00:10:39.498 --rc genhtml_legend=1 00:10:39.498 --rc geninfo_all_blocks=1 00:10:39.498 --rc geninfo_unexecuted_blocks=1 00:10:39.498 00:10:39.498 ' 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:39.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.498 --rc genhtml_branch_coverage=1 00:10:39.498 --rc genhtml_function_coverage=1 00:10:39.498 --rc genhtml_legend=1 00:10:39.498 --rc geninfo_all_blocks=1 00:10:39.498 --rc geninfo_unexecuted_blocks=1 00:10:39.498 00:10:39.498 ' 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:39.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.498 --rc genhtml_branch_coverage=1 00:10:39.498 --rc genhtml_function_coverage=1 00:10:39.498 --rc genhtml_legend=1 00:10:39.498 --rc geninfo_all_blocks=1 00:10:39.498 --rc geninfo_unexecuted_blocks=1 00:10:39.498 00:10:39.498 ' 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:39.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.498 --rc genhtml_branch_coverage=1 00:10:39.498 --rc genhtml_function_coverage=1 00:10:39.498 --rc genhtml_legend=1 00:10:39.498 --rc geninfo_all_blocks=1 00:10:39.498 --rc geninfo_unexecuted_blocks=1 00:10:39.498 00:10:39.498 ' 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:39.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:39.498 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:39.499 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.499 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.499 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.499 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:39.499 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:39.499 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:39.499 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.132 16:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:46.132 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:46.132 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.132 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:46.133 Found net devices under 0000:86:00.0: cvl_0_0 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
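The gather_supported_nvmf_pci_devs loop traced above maps the supported NIC PCI functions (here the two E810 ports, device ID 0x159b) to their Linux net devices through sysfs. A rough equivalent, with the PCI addresses from this run; reading operstate for the "up" check is an assumption, since the helper's exact test is not visible in the log:

#!/usr/bin/env bash
# Rough equivalent of the PCI -> net-device mapping traced above.
# PCI addresses are the E810 functions found in this run.
net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    # Netdevs bound to a PCI function are listed under its sysfs node.
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $dev ]] || continue
        name=${dev##*/}                              # e.g. cvl_0_0, cvl_0_1
        state=$(cat "$dev/operstate" 2>/dev/null)    # assumed stand-in for the "up" check
        echo "Found net device under $pci: $name ($state)"
        net_devs+=("$name")
    done
done
echo "TCP test interfaces: ${net_devs[*]}"
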
00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:46.133 Found net devices under 0000:86:00.1: cvl_0_1 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.133 16:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:10:46.133 00:10:46.133 --- 10.0.0.2 ping statistics --- 00:10:46.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.133 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:46.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:10:46.133 00:10:46.133 --- 10.0.0.1 ping statistics --- 00:10:46.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.133 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1835063 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1835063 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1835063 ']' 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.133 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.133 [2024-11-20 16:12:16.588065] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:10:46.133 [2024-11-20 16:12:16.588107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.133 [2024-11-20 16:12:16.669608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.133 [2024-11-20 16:12:16.711412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.133 [2024-11-20 16:12:16.711453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.133 [2024-11-20 16:12:16.711460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.133 [2024-11-20 16:12:16.711466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.133 [2024-11-20 16:12:16.711471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
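The nvmf_tcp_init and nvmfappstart entries above set up the single-host topology used for the rest of the run: one E810 port is moved into a network namespace as the target side, the other stays in the root namespace as the initiator, both on 10.0.0.0/24, and nvmf_tgt is started inside the namespace. A condensed sketch of the same sequence, with the names, IPs and core mask from this run; the nvmf_tgt path assumes an SPDK checkout as the working directory:

#!/usr/bin/env bash
# Condensed version of the nvmf_tcp_init + nvmfappstart sequence traced above.
set -ex

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic arriving on the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: test rule'

# Verify reachability both ways, then start the target inside the namespace.
ping -c 1 "$TARGET_IP"
ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
modprobe nvme-tcp
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Running the target in its own namespace keeps the kernel NVMe/TCP initiator and the SPDK target on separate network stacks even though they share one host, which is what the two ping checks in the log (0.457 ms and 0.139 ms) verify before the target comes up.
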
00:10:46.133 [2024-11-20 16:12:16.713033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.133 [2024-11-20 16:12:16.713052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.133 [2024-11-20 16:12:16.713172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.133 [2024-11-20 16:12:16.713173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.392 [2024-11-20 16:12:17.481009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.392 Null1 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.392 16:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.392 [2024-11-20 16:12:17.539353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.392 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.393 Null2 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:46.393 Null3 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.393 Null4 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.393 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.652 16:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:46.652 00:10:46.652 Discovery Log Number of Records 6, Generation counter 6 00:10:46.652 =====Discovery Log Entry 0====== 00:10:46.652 trtype: tcp 00:10:46.652 adrfam: ipv4 00:10:46.652 subtype: current discovery subsystem 00:10:46.652 treq: not required 00:10:46.652 portid: 0 00:10:46.652 trsvcid: 4420 00:10:46.652 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:46.652 traddr: 10.0.0.2 00:10:46.652 eflags: explicit discovery connections, duplicate discovery information 00:10:46.652 sectype: none 00:10:46.652 =====Discovery Log Entry 1====== 00:10:46.652 trtype: tcp 00:10:46.652 adrfam: ipv4 00:10:46.652 subtype: nvme subsystem 00:10:46.652 treq: not required 00:10:46.652 portid: 0 00:10:46.652 trsvcid: 4420 00:10:46.652 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:46.652 traddr: 10.0.0.2 00:10:46.652 eflags: none 00:10:46.652 sectype: none 00:10:46.652 =====Discovery Log Entry 2====== 00:10:46.652 trtype: tcp 00:10:46.652 adrfam: ipv4 00:10:46.652 subtype: nvme subsystem 00:10:46.652 treq: not required 00:10:46.652 portid: 0 00:10:46.652 trsvcid: 4420 00:10:46.652 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:46.652 traddr: 10.0.0.2 00:10:46.652 eflags: none 00:10:46.652 sectype: none 00:10:46.652 =====Discovery Log Entry 3====== 00:10:46.652 trtype: tcp 00:10:46.652 adrfam: ipv4 00:10:46.652 subtype: nvme subsystem 00:10:46.652 treq: not required 00:10:46.652 portid: 0 00:10:46.652 trsvcid: 4420 00:10:46.652 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:46.652 traddr: 10.0.0.2 00:10:46.652 eflags: none 00:10:46.652 sectype: none 00:10:46.652 =====Discovery Log Entry 4====== 00:10:46.652 trtype: tcp 00:10:46.652 adrfam: ipv4 00:10:46.652 subtype: nvme subsystem 
00:10:46.652 treq: not required 00:10:46.652 portid: 0 00:10:46.652 trsvcid: 4420 00:10:46.652 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:46.652 traddr: 10.0.0.2 00:10:46.652 eflags: none 00:10:46.652 sectype: none 00:10:46.652 =====Discovery Log Entry 5====== 00:10:46.652 trtype: tcp 00:10:46.652 adrfam: ipv4 00:10:46.652 subtype: discovery subsystem referral 00:10:46.652 treq: not required 00:10:46.652 portid: 0 00:10:46.652 trsvcid: 4430 00:10:46.652 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:46.652 traddr: 10.0.0.2 00:10:46.652 eflags: none 00:10:46.652 sectype: none 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:46.652 Perform nvmf subsystem discovery via RPC 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.652 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.911 [ 00:10:46.911 { 00:10:46.911 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:46.911 "subtype": "Discovery", 00:10:46.911 "listen_addresses": [ 00:10:46.911 { 00:10:46.911 "trtype": "TCP", 00:10:46.911 "adrfam": "IPv4", 00:10:46.911 "traddr": "10.0.0.2", 00:10:46.911 "trsvcid": "4420" 00:10:46.911 } 00:10:46.911 ], 00:10:46.911 "allow_any_host": true, 00:10:46.911 "hosts": [] 00:10:46.911 }, 00:10:46.911 { 00:10:46.911 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.911 "subtype": "NVMe", 00:10:46.911 "listen_addresses": [ 00:10:46.911 { 00:10:46.911 "trtype": "TCP", 00:10:46.911 "adrfam": "IPv4", 00:10:46.911 "traddr": "10.0.0.2", 00:10:46.911 "trsvcid": "4420" 00:10:46.911 } 00:10:46.911 ], 00:10:46.911 "allow_any_host": true, 00:10:46.911 "hosts": [], 00:10:46.911 "serial_number": "SPDK00000000000001", 00:10:46.911 "model_number": "SPDK bdev Controller", 00:10:46.911 "max_namespaces": 32, 00:10:46.911 "min_cntlid": 1, 00:10:46.911 "max_cntlid": 65519, 00:10:46.911 "namespaces": [ 00:10:46.911 { 00:10:46.911 "nsid": 1, 00:10:46.911 "bdev_name": "Null1", 00:10:46.911 "name": "Null1", 00:10:46.911 "nguid": "9F77263F89DE43C1B08271114DDF0412", 00:10:46.911 "uuid": "9f77263f-89de-43c1-b082-71114ddf0412" 00:10:46.911 } 00:10:46.911 ] 00:10:46.911 }, 00:10:46.911 { 00:10:46.911 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:46.911 "subtype": "NVMe", 00:10:46.911 "listen_addresses": [ 00:10:46.911 { 00:10:46.911 "trtype": "TCP", 00:10:46.911 "adrfam": "IPv4", 00:10:46.911 "traddr": "10.0.0.2", 00:10:46.911 "trsvcid": "4420" 00:10:46.911 } 00:10:46.911 ], 00:10:46.911 "allow_any_host": true, 00:10:46.911 "hosts": [], 00:10:46.911 "serial_number": "SPDK00000000000002", 00:10:46.911 "model_number": "SPDK bdev Controller", 00:10:46.911 "max_namespaces": 32, 00:10:46.911 "min_cntlid": 1, 00:10:46.911 "max_cntlid": 65519, 00:10:46.911 "namespaces": [ 00:10:46.911 { 00:10:46.911 "nsid": 1, 00:10:46.911 "bdev_name": "Null2", 00:10:46.911 "name": "Null2", 00:10:46.911 "nguid": "B683C99379E84B3DBFDF675D97A613AE", 00:10:46.911 "uuid": "b683c993-79e8-4b3d-bfdf-675d97a613ae" 00:10:46.911 } 00:10:46.911 ] 00:10:46.911 }, 00:10:46.911 { 00:10:46.911 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:46.911 "subtype": "NVMe", 00:10:46.911 "listen_addresses": [ 00:10:46.911 { 00:10:46.911 "trtype": "TCP", 00:10:46.911 "adrfam": "IPv4", 00:10:46.911 "traddr": "10.0.0.2", 
00:10:46.911 "trsvcid": "4420" 00:10:46.911 } 00:10:46.911 ], 00:10:46.911 "allow_any_host": true, 00:10:46.911 "hosts": [], 00:10:46.911 "serial_number": "SPDK00000000000003", 00:10:46.911 "model_number": "SPDK bdev Controller", 00:10:46.911 "max_namespaces": 32, 00:10:46.911 "min_cntlid": 1, 00:10:46.911 "max_cntlid": 65519, 00:10:46.911 "namespaces": [ 00:10:46.911 { 00:10:46.911 "nsid": 1, 00:10:46.911 "bdev_name": "Null3", 00:10:46.911 "name": "Null3", 00:10:46.911 "nguid": "6D6EF47DCD53456A959FD34EC5389123", 00:10:46.911 "uuid": "6d6ef47d-cd53-456a-959f-d34ec5389123" 00:10:46.911 } 00:10:46.911 ] 00:10:46.911 }, 00:10:46.911 { 00:10:46.911 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:46.911 "subtype": "NVMe", 00:10:46.911 "listen_addresses": [ 00:10:46.911 { 00:10:46.911 "trtype": "TCP", 00:10:46.911 "adrfam": "IPv4", 00:10:46.911 "traddr": "10.0.0.2", 00:10:46.911 "trsvcid": "4420" 00:10:46.911 } 00:10:46.911 ], 00:10:46.911 "allow_any_host": true, 00:10:46.911 "hosts": [], 00:10:46.911 "serial_number": "SPDK00000000000004", 00:10:46.911 "model_number": "SPDK bdev Controller", 00:10:46.911 "max_namespaces": 32, 00:10:46.911 "min_cntlid": 1, 00:10:46.911 "max_cntlid": 65519, 00:10:46.911 "namespaces": [ 00:10:46.911 { 00:10:46.911 "nsid": 1, 00:10:46.911 "bdev_name": "Null4", 00:10:46.911 "name": "Null4", 00:10:46.911 "nguid": "A1BB795997244CEFA3AA2943527F6DA9", 00:10:46.911 "uuid": "a1bb7959-9724-4cef-a3aa-2943527f6da9" 00:10:46.911 } 00:10:46.912 ] 00:10:46.912 } 00:10:46.912 ] 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.912 16:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:46.912 16:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.912 rmmod nvme_tcp 00:10:46.912 rmmod nvme_fabrics 00:10:46.912 rmmod nvme_keyring 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1835063 ']' 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1835063 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1835063 ']' 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1835063 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.912 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1835063 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1835063' 00:10:47.171 killing process with pid 1835063 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1835063 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1835063 00:10:47.171 16:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.171 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:49.708 00:10:49.708 real 0m10.074s 00:10:49.708 user 0m8.452s 00:10:49.708 sys 0m4.904s 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.708 ************************************ 00:10:49.708 END TEST nvmf_target_discovery 00:10:49.708 ************************************ 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:49.708 ************************************ 00:10:49.708 START TEST nvmf_referrals 00:10:49.708 ************************************ 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:49.708 * Looking for test storage... 
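For orientation, the nvmf_target_discovery teardown traced above reduces to a short RPC loop. The sketch below is reconstructed only from commands visible in the trace (rpc_cmd, nvmftestfini and check_bdevs are the harness's own names); it is illustrative, not a copy of discovery.sh.

    # Sketch of the traced teardown, assuming the autotest common scripts are sourced (rpc_cmd, nvmftestfini)
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"    # drop each test subsystem
        rpc_cmd bdev_null_delete "Null${i}"                              # and its backing null bdev
    done
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430    # referral added during setup
    check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')             # expect an empty list now
    [ -z "$check_bdevs" ] || echo "leftover bdevs: $check_bdevs"
    trap - SIGINT SIGTERM EXIT
    nvmftestfini                                                         # stops nvmf_tgt and cleans up the netns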
00:10:49.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.708 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.708 --rc genhtml_branch_coverage=1 00:10:49.708 --rc genhtml_function_coverage=1 00:10:49.708 --rc genhtml_legend=1 00:10:49.708 --rc geninfo_all_blocks=1 00:10:49.708 --rc geninfo_unexecuted_blocks=1 00:10:49.708 00:10:49.708 ' 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:49.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.709 --rc genhtml_branch_coverage=1 00:10:49.709 --rc genhtml_function_coverage=1 00:10:49.709 --rc genhtml_legend=1 00:10:49.709 --rc geninfo_all_blocks=1 00:10:49.709 --rc geninfo_unexecuted_blocks=1 00:10:49.709 00:10:49.709 ' 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:49.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.709 --rc genhtml_branch_coverage=1 00:10:49.709 --rc genhtml_function_coverage=1 00:10:49.709 --rc genhtml_legend=1 00:10:49.709 --rc geninfo_all_blocks=1 00:10:49.709 --rc geninfo_unexecuted_blocks=1 00:10:49.709 00:10:49.709 ' 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:49.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.709 --rc genhtml_branch_coverage=1 00:10:49.709 --rc genhtml_function_coverage=1 00:10:49.709 --rc genhtml_legend=1 00:10:49.709 --rc geninfo_all_blocks=1 00:10:49.709 --rc geninfo_unexecuted_blocks=1 00:10:49.709 00:10:49.709 ' 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:49.709 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.275 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.275 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.275 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.275 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.275 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.275 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.275 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:56.276 16:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:56.276 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:56.276 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:56.276 
16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:56.276 Found net devices under 0000:86:00.0: cvl_0_0 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:56.276 Found net devices under 0000:86:00.1: cvl_0_1 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:56.276 16:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:56.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:56.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:10:56.276 00:10:56.276 --- 10.0.0.2 ping statistics --- 00:10:56.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.276 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:10:56.276 00:10:56.276 --- 10.0.0.1 ping statistics --- 00:10:56.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.276 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.276 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1838936 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1838936 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1838936 ']' 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 [2024-11-20 16:12:26.710478] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:10:56.277 [2024-11-20 16:12:26.710530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.277 [2024-11-20 16:12:26.794125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.277 [2024-11-20 16:12:26.836945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.277 [2024-11-20 16:12:26.836980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.277 [2024-11-20 16:12:26.836988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.277 [2024-11-20 16:12:26.836994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.277 [2024-11-20 16:12:26.836999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.277 [2024-11-20 16:12:26.838417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.277 [2024-11-20 16:12:26.838473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.277 [2024-11-20 16:12:26.838585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.277 [2024-11-20 16:12:26.838585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 [2024-11-20 16:12:26.984073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.277 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
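The referral setup traced in the entries that follow comes down to a handful of RPCs: create the TCP transport, expose the discovery service on port 8009, then register three referrals on port 4430. Condensed below with the harness's rpc_cmd wrapper; every flag appears verbatim in the trace.

    # The referral setup that the next trace entries execute, condensed
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery    # discovery service listener
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430             # referral entries to advertise
    done
    rpc_cmd nvmf_discovery_get_referrals | jq length                            # the test asserts this is 3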
00:10:56.277 [2024-11-20 16:12:27.005348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:56.277 16:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:56.277 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:56.278 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.278 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:56.278 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:56.535 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:56.535 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:56.536 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:56.793 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:56.793 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:56.793 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:56.793 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:56.793 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:56.793 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.793 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:56.793 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.050 16:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.050 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.307 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:57.307 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.308 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.565 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.822 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:57.822 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:57.822 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:57.822 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:57.822 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:57.822 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.822 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:57.822 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:57.822 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:57.822 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:57.822 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:57.822 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.822 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:57.822 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
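Every host-side check traced above follows the same pattern: query the discovery service with nvme discover and filter the JSON log page with jq on the record subtype. A condensed sketch of that pattern; the host NQN/ID are the ones generated by nvme gen-hostnqn earlier in the trace, the jq filters are the ones used by referrals.sh, and the discover_json helper name is made up here for illustration.

    # Condensed form of the traced host-side verification
    discover_json() {
        nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -a 10.0.0.2 -s 8009 -o json
    }
    # Referral addresses, excluding the discovery subsystem we are talking to:
    discover_json | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # A referral registered with -n <subsystem NQN> is reported as an NVMe subsystem record:
    discover_json | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
    # One registered with -n discovery shows up as a discovery subsystem referral instead:
    discover_json | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'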
00:10:57.822 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:57.822 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.822 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.822 rmmod nvme_tcp 00:10:58.084 rmmod nvme_fabrics 00:10:58.084 rmmod nvme_keyring 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1838936 ']' 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1838936 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1838936 ']' 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1838936 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1838936 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1838936' 00:10:58.084 killing process with pid 1838936 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1838936 00:10:58.084 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1838936 00:10:58.346 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.346 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:58.346 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:58.346 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:58.346 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:58.347 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:58.347 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:58.347 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.347 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.347 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.347 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.347 16:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.251 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:00.251 00:11:00.251 real 0m10.935s 00:11:00.251 user 0m12.410s 00:11:00.251 sys 0m5.330s 00:11:00.251 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.251 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.251 ************************************ 00:11:00.251 END TEST nvmf_referrals 00:11:00.251 ************************************ 00:11:00.251 16:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:00.251 16:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.251 16:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.251 16:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.251 ************************************ 00:11:00.251 START TEST nvmf_connect_disconnect 00:11:00.251 ************************************ 00:11:00.251 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:00.510 * Looking for test storage... 00:11:00.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.510 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.510 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.510 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.510 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.510 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.510 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.510 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.510 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.511 16:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.511 --rc genhtml_branch_coverage=1 00:11:00.511 --rc genhtml_function_coverage=1 00:11:00.511 --rc genhtml_legend=1 00:11:00.511 --rc geninfo_all_blocks=1 00:11:00.511 --rc geninfo_unexecuted_blocks=1 00:11:00.511 00:11:00.511 ' 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.511 --rc genhtml_branch_coverage=1 00:11:00.511 --rc genhtml_function_coverage=1 00:11:00.511 --rc genhtml_legend=1 00:11:00.511 --rc geninfo_all_blocks=1 00:11:00.511 --rc geninfo_unexecuted_blocks=1 00:11:00.511 00:11:00.511 ' 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.511 --rc genhtml_branch_coverage=1 00:11:00.511 --rc genhtml_function_coverage=1 00:11:00.511 --rc genhtml_legend=1 00:11:00.511 --rc geninfo_all_blocks=1 00:11:00.511 --rc geninfo_unexecuted_blocks=1 00:11:00.511 00:11:00.511 ' 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.511 --rc genhtml_branch_coverage=1 00:11:00.511 --rc genhtml_function_coverage=1 00:11:00.511 --rc genhtml_legend=1 00:11:00.511 --rc geninfo_all_blocks=1 00:11:00.511 --rc geninfo_unexecuted_blocks=1 00:11:00.511 00:11:00.511 ' 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.511 16:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.511 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.512 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.512 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.512 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.512 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.512 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.512 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.512 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.512 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.512 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.090 
16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.090 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:07.091 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.091 
16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:07.091 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:07.091 Found net devices under 0000:86:00.0: cvl_0_0 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
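Everything gather_supported_nvmf_pci_devs does above boils down to a sysfs lookup: the supported Intel/Mellanox device IDs are collected into the e810/x722/mlx arrays, and each matching PCI function is then mapped to its kernel net device by globbing the device's net/ directory, which is how 0000:86:00.0 resolves to cvl_0_0 here (and 0000:86:00.1 to cvl_0_1 just below). A minimal sketch of that per-device lookup on its own; the PCI address is simply the one from this run:

# Print the net interfaces backed by one PCI function, mirroring the
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob in the trace.
pci=0000:86:00.0
for d in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$d" ] || continue      # the glob stays literal if no netdev is bound
    echo "${d##*/}"              # e.g. cvl_0_0 on this machine
done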
00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:07.091 Found net devices under 0000:86:00.1: cvl_0_1 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:07.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:11:07.091 00:11:07.091 --- 10.0.0.2 ping statistics --- 00:11:07.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.091 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:11:07.091 00:11:07.091 --- 10.0.0.1 ping statistics --- 00:11:07.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.091 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1842928 00:11:07.091 16:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1842928 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1842928 ']' 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.091 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.092 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.092 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.092 [2024-11-20 16:12:37.751699] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:11:07.092 [2024-11-20 16:12:37.751753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.092 [2024-11-20 16:12:37.829560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.092 [2024-11-20 16:12:37.871522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.092 [2024-11-20 16:12:37.871559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.092 [2024-11-20 16:12:37.871566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.092 [2024-11-20 16:12:37.871573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.092 [2024-11-20 16:12:37.871579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
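By this point nvmf_tcp_init has split the two E810 ports across network namespaces so that initiator and target traffic actually leaves the host stack: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP port 4420, both directions are ping-tested, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace. A condensed sketch of the same wiring, using the interface names and addresses from this run and assuming it is executed as root from an SPDK checkout:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                  # root ns -> namespaced target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back again
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &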
00:11:07.092 [2024-11-20 16:12:37.873177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.092 [2024-11-20 16:12:37.873289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.092 [2024-11-20 16:12:37.873321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.092 [2024-11-20 16:12:37.873322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.092 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.092 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:07.092 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.092 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.092 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.092 [2024-11-20 16:12:38.015884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.092 16:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.092 [2024-11-20 16:12:38.080470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:07.092 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:10.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.614 rmmod nvme_tcp 00:11:23.614 rmmod nvme_fabrics 00:11:23.614 rmmod nvme_keyring 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:23.614 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1842928 ']' 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1842928 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1842928 ']' 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1842928 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
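With the target reactors running, connect_disconnect.sh provisions it through the five RPCs visible above (nvmf_create_transport for TCP, a 64 MB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420) and then performs num_iterations=5 connect/disconnect cycles, each ending in one of the "disconnected 1 controller(s)" lines. A sketch of the same provisioning plus the loop body done by hand with scripts/rpc.py and nvme-cli; the nvme connect flags here are an assumption, since the captured run drives the initiator through the test's own helpers and also passes the generated --hostnqn/--hostid options, which are omitted below:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc bdev_malloc_create 64 512                          # 64 MB bdev, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

for i in $(seq 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # -> "disconnected 1 controller(s)"
done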
00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1842928 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1842928' 00:11:23.615 killing process with pid 1842928 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1842928 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1842928 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.615 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.536 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:25.536 00:11:25.536 real 0m25.203s 00:11:25.536 user 1m8.087s 00:11:25.536 sys 0m5.877s 00:11:25.536 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.536 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.536 ************************************ 00:11:25.536 END TEST nvmf_connect_disconnect 00:11:25.536 ************************************ 00:11:25.536 16:12:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:25.536 16:12:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:25.536 16:12:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.536 16:12:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:25.536 ************************************ 00:11:25.536 START TEST nvmf_multitarget 00:11:25.536 ************************************ 00:11:25.536 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:25.795 * Looking for test storage... 00:11:25.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:25.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.796 --rc genhtml_branch_coverage=1 00:11:25.796 --rc genhtml_function_coverage=1 00:11:25.796 --rc genhtml_legend=1 00:11:25.796 --rc geninfo_all_blocks=1 00:11:25.796 --rc geninfo_unexecuted_blocks=1 00:11:25.796 00:11:25.796 ' 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:25.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.796 --rc genhtml_branch_coverage=1 00:11:25.796 --rc genhtml_function_coverage=1 00:11:25.796 --rc genhtml_legend=1 00:11:25.796 --rc geninfo_all_blocks=1 00:11:25.796 --rc geninfo_unexecuted_blocks=1 00:11:25.796 00:11:25.796 ' 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:25.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.796 --rc genhtml_branch_coverage=1 00:11:25.796 --rc genhtml_function_coverage=1 00:11:25.796 --rc genhtml_legend=1 00:11:25.796 --rc geninfo_all_blocks=1 00:11:25.796 --rc geninfo_unexecuted_blocks=1 00:11:25.796 00:11:25.796 ' 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:25.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.796 --rc genhtml_branch_coverage=1 00:11:25.796 --rc genhtml_function_coverage=1 00:11:25.796 --rc genhtml_legend=1 00:11:25.796 --rc geninfo_all_blocks=1 00:11:25.796 --rc geninfo_unexecuted_blocks=1 00:11:25.796 00:11:25.796 ' 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.796 16:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.796 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:25.797 16:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:25.797 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:32.392 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:32.393 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:32.393 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:32.393 Found net devices under 0000:86:00.0: cvl_0_0 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:32.393 Found net devices under 0000:86:00.1: cvl_0_1 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:32.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:11:32.393 00:11:32.393 --- 10.0.0.2 ping statistics --- 00:11:32.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.393 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:11:32.393 00:11:32.393 --- 10.0.0.1 ping statistics --- 00:11:32.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.393 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.393 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.394 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1849316 00:11:32.394 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.394 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1849316 00:11:32.394 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1849316 ']' 00:11:32.394 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.394 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.394 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.394 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.394 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.394 [2024-11-20 16:13:03.004580] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:11:32.394 [2024-11-20 16:13:03.004624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.394 [2024-11-20 16:13:03.080448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.394 [2024-11-20 16:13:03.122919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.394 [2024-11-20 16:13:03.122955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.394 [2024-11-20 16:13:03.122962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.394 [2024-11-20 16:13:03.122968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.394 [2024-11-20 16:13:03.122973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.394 [2024-11-20 16:13:03.124392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.394 [2024-11-20 16:13:03.124504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.394 [2024-11-20 16:13:03.124610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.394 [2024-11-20 16:13:03.124611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:32.394 "nvmf_tgt_1" 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:32.394 "nvmf_tgt_2" 00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:11:32.394 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:32.652 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:32.652 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:32.652 true 00:11:32.652 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:32.910 true 00:11:32.910 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:32.910 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.910 rmmod nvme_tcp 00:11:32.910 rmmod nvme_fabrics 00:11:32.910 rmmod nvme_keyring 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1849316 ']' 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1849316 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1849316 ']' 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1849316 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.910 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1849316 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.169 16:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1849316' 00:11:33.169 killing process with pid 1849316 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1849316 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1849316 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.169 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.704 00:11:35.704 real 0m9.630s 00:11:35.704 user 0m7.240s 00:11:35.704 sys 0m4.912s 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.704 ************************************ 00:11:35.704 END TEST nvmf_multitarget 00:11:35.704 ************************************ 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.704 ************************************ 00:11:35.704 START TEST nvmf_rpc 00:11:35.704 ************************************ 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:35.704 * Looking for test storage... 
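For readers following the trace above, the nvmf_multitarget run that just finished boils down to a short RPC sequence against the running nvmf_tgt. The following is a condensed sketch reconstructed only from the commands already traced in this log (same script path, target names, and flags); it assumes the target is still up and reachable over the default /var/tmp/spdk.sock RPC socket, as it was during the run.

# Sketch of the multitarget flow traced above (not part of the captured output).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$rpc_py nvmf_get_targets | jq length            # 1: only the default target exists
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32  # add two extra targets
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc_py nvmf_get_targets | jq length            # 3: default plus the two new targets
$rpc_py nvmf_delete_target -n nvmf_tgt_1        # tear them down again
$rpc_py nvmf_delete_target -n nvmf_tgt_2
$rpc_py nvmf_get_targets | jq length            # back to 1

The test passes when each jq length check matches the expected count, which is exactly what the '[' 1 '!=' 1 ']' and '[' 3 '!=' 3 ']' comparisons in the trace verify.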
00:11:35.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:35.704 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.705 --rc genhtml_branch_coverage=1 00:11:35.705 --rc genhtml_function_coverage=1 00:11:35.705 --rc genhtml_legend=1 00:11:35.705 --rc geninfo_all_blocks=1 00:11:35.705 --rc geninfo_unexecuted_blocks=1 00:11:35.705 00:11:35.705 ' 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.705 --rc genhtml_branch_coverage=1 00:11:35.705 --rc genhtml_function_coverage=1 00:11:35.705 --rc genhtml_legend=1 00:11:35.705 --rc geninfo_all_blocks=1 00:11:35.705 --rc geninfo_unexecuted_blocks=1 00:11:35.705 00:11:35.705 ' 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:35.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.705 --rc genhtml_branch_coverage=1 00:11:35.705 --rc genhtml_function_coverage=1 00:11:35.705 --rc genhtml_legend=1 00:11:35.705 --rc geninfo_all_blocks=1 00:11:35.705 --rc geninfo_unexecuted_blocks=1 00:11:35.705 00:11:35.705 ' 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.705 --rc genhtml_branch_coverage=1 00:11:35.705 --rc genhtml_function_coverage=1 00:11:35.705 --rc genhtml_legend=1 00:11:35.705 --rc geninfo_all_blocks=1 00:11:35.705 --rc geninfo_unexecuted_blocks=1 00:11:35.705 00:11:35.705 ' 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
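Before the rpc test issues any RPCs, nvmftestinit repeats the same physical-NIC bring-up seen in the previous test; the full sequence is traced again below. Condensed from those traced commands, the network setup amounts to the following sketch (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are as detected and assigned on this node, not fixed values):

# Sketch of the nvmftestinit bring-up traced below (target side runs in a netns).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                  # namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # sanity checks in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Only once both pings succeed does the test load nvme-tcp and start nvmf_tgt inside the cvl_0_0_ns_spdk namespace, as the trace below shows.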
00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.705 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.706 16:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.706 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.277 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:42.278 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:42.278 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:42.278 Found net devices under 0000:86:00.0: cvl_0_0 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:42.278 Found net devices under 0000:86:00.1: cvl_0_1 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.278 16:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:11:42.278 00:11:42.278 --- 10.0.0.2 ping statistics --- 00:11:42.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.278 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:11:42.278 00:11:42.278 --- 10.0.0.1 ping statistics --- 00:11:42.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.278 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1853096 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1853096 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1853096 ']' 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 [2024-11-20 16:13:12.734365] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:11:42.278 [2024-11-20 16:13:12.734415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.278 [2024-11-20 16:13:12.813105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.278 [2024-11-20 16:13:12.856701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.278 [2024-11-20 16:13:12.856734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.278 [2024-11-20 16:13:12.856741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.278 [2024-11-20 16:13:12.856747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.278 [2024-11-20 16:13:12.856752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.278 [2024-11-20 16:13:12.858353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.278 [2024-11-20 16:13:12.858460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.278 [2024-11-20 16:13:12.858553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.278 [2024-11-20 16:13:12.858555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.278 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:42.279 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.279 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.279 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.279 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.279 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:42.279 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.279 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:42.279 "tick_rate": 2100000000, 00:11:42.279 "poll_groups": [ 00:11:42.279 { 00:11:42.279 "name": "nvmf_tgt_poll_group_000", 00:11:42.279 "admin_qpairs": 0, 00:11:42.279 "io_qpairs": 0, 00:11:42.279 "current_admin_qpairs": 0, 00:11:42.279 "current_io_qpairs": 0, 00:11:42.279 "pending_bdev_io": 0, 00:11:42.279 "completed_nvme_io": 0, 00:11:42.279 "transports": [] 00:11:42.279 }, 00:11:42.279 { 00:11:42.279 "name": "nvmf_tgt_poll_group_001", 00:11:42.279 "admin_qpairs": 0, 00:11:42.279 "io_qpairs": 0, 00:11:42.279 "current_admin_qpairs": 0, 00:11:42.279 "current_io_qpairs": 0, 00:11:42.279 "pending_bdev_io": 0, 00:11:42.279 "completed_nvme_io": 0, 00:11:42.279 "transports": [] 00:11:42.279 }, 00:11:42.279 { 00:11:42.279 "name": "nvmf_tgt_poll_group_002", 00:11:42.279 "admin_qpairs": 0, 00:11:42.279 "io_qpairs": 0, 00:11:42.279 
"current_admin_qpairs": 0, 00:11:42.279 "current_io_qpairs": 0, 00:11:42.279 "pending_bdev_io": 0, 00:11:42.279 "completed_nvme_io": 0, 00:11:42.279 "transports": [] 00:11:42.279 }, 00:11:42.279 { 00:11:42.279 "name": "nvmf_tgt_poll_group_003", 00:11:42.279 "admin_qpairs": 0, 00:11:42.279 "io_qpairs": 0, 00:11:42.279 "current_admin_qpairs": 0, 00:11:42.279 "current_io_qpairs": 0, 00:11:42.279 "pending_bdev_io": 0, 00:11:42.279 "completed_nvme_io": 0, 00:11:42.279 "transports": [] 00:11:42.279 } 00:11:42.279 ] 00:11:42.279 }' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.279 [2024-11-20 16:13:13.101197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:42.279 "tick_rate": 2100000000, 00:11:42.279 "poll_groups": [ 00:11:42.279 { 00:11:42.279 "name": "nvmf_tgt_poll_group_000", 00:11:42.279 "admin_qpairs": 0, 00:11:42.279 "io_qpairs": 0, 00:11:42.279 "current_admin_qpairs": 0, 00:11:42.279 "current_io_qpairs": 0, 00:11:42.279 "pending_bdev_io": 0, 00:11:42.279 "completed_nvme_io": 0, 00:11:42.279 "transports": [ 00:11:42.279 { 00:11:42.279 "trtype": "TCP" 00:11:42.279 } 00:11:42.279 ] 00:11:42.279 }, 00:11:42.279 { 00:11:42.279 "name": "nvmf_tgt_poll_group_001", 00:11:42.279 "admin_qpairs": 0, 00:11:42.279 "io_qpairs": 0, 00:11:42.279 "current_admin_qpairs": 0, 00:11:42.279 "current_io_qpairs": 0, 00:11:42.279 "pending_bdev_io": 0, 00:11:42.279 "completed_nvme_io": 0, 00:11:42.279 "transports": [ 00:11:42.279 { 00:11:42.279 "trtype": "TCP" 00:11:42.279 } 00:11:42.279 ] 00:11:42.279 }, 00:11:42.279 { 00:11:42.279 "name": "nvmf_tgt_poll_group_002", 00:11:42.279 "admin_qpairs": 0, 00:11:42.279 "io_qpairs": 0, 00:11:42.279 "current_admin_qpairs": 0, 00:11:42.279 "current_io_qpairs": 0, 00:11:42.279 "pending_bdev_io": 0, 00:11:42.279 "completed_nvme_io": 0, 00:11:42.279 "transports": [ 00:11:42.279 { 00:11:42.279 "trtype": "TCP" 
00:11:42.279 } 00:11:42.279 ] 00:11:42.279 }, 00:11:42.279 { 00:11:42.279 "name": "nvmf_tgt_poll_group_003", 00:11:42.279 "admin_qpairs": 0, 00:11:42.279 "io_qpairs": 0, 00:11:42.279 "current_admin_qpairs": 0, 00:11:42.279 "current_io_qpairs": 0, 00:11:42.279 "pending_bdev_io": 0, 00:11:42.279 "completed_nvme_io": 0, 00:11:42.279 "transports": [ 00:11:42.279 { 00:11:42.279 "trtype": "TCP" 00:11:42.279 } 00:11:42.279 ] 00:11:42.279 } 00:11:42.279 ] 00:11:42.279 }' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.279 Malloc1 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.279 [2024-11-20 16:13:13.287332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:42.279 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.280 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:42.280 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:42.280 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:42.280 [2024-11-20 16:13:13.311916] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:11:42.280 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:42.280 could not add new controller: failed to write to nvme-fabrics device 00:11:42.280 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:42.280 16:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:42.280 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:42.280 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:42.280 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:42.280 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.280 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.280 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.280 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.214 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.214 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:43.214 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.214 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:43.214 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.742 [2024-11-20 16:13:16.585059] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:11:45.742 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:45.742 could not add new controller: failed to write to nvme-fabrics device 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:45.742 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:45.743 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:45.743 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:45.743 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.743 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.743 
16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.743 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.676 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.676 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:46.676 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.676 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:46.676 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:48.577 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:48.577 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:48.577 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.577 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:48.577 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.577 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:48.577 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:48.835 
16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.835 [2024-11-20 16:13:19.909922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.835 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.210 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.210 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.210 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.210 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.210 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.111 [2024-11-20 16:13:23.264422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
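The loop at target/rpc.sh@81, whose first pass finishes just above, repeats the same provision/connect/teardown cycle five times. A hedged sketch of one iteration, using scripts/rpc.py directly where the log's rpc_cmd wrapper issues the same RPCs; the script location and working directory are assumptions, the NQN, serial, address and bdev name are taken from the log:

#!/usr/bin/env bash
# One iteration of the provision/connect/teardown cycle shown above.
# RPC path assumed (run from the SPDK repo root); rpc_cmd in the log
# ultimately drives the same RPC methods.
set -euo pipefail

RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5      # attach the Malloc bdev as nsid 5
$RPC nvmf_subsystem_allow_any_host "$NQN"

nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
sleep 2                                             # give the namespace time to surface
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME

nvme disconnect -n "$NQN"
$RPC nvmf_subsystem_remove_ns "$NQN" 5
$RPC nvmf_delete_subsystem "$NQN"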
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.111 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:53.483 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.483 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:53.483 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.483 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:53.483 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.377 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.634 [2024-11-20 16:13:26.614448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.634 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.001 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.001 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.001 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.001 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.001 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:58.899 
16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.899 [2024-11-20 16:13:29.947190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.899 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.900 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.274 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.274 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.274 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.274 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:00.274 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
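The serial-number checks running above and continuing below come from two small polling helpers: the block device is located purely by the SERIAL column that lsblk reports, which matches the -s value the subsystem was created with. A hedged re-creation of that pattern; the retry counts and sleep intervals here are guesses, not the harness's exact values:

# Sketch of the waitforserial / waitforserial_disconnect polling seen in the
# log. Retry limits and sleeps are illustrative guesses.
waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        # count block devices whose SERIAL column matches
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
            return 0
        fi
    done
    return 1
}

waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 20 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 1
    done
    return 1
}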
00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 [2024-11-20 16:13:33.302743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.174 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.548 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.548 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:03.548 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.548 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:03.548 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:05.448 
16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.448 [2024-11-20 16:13:36.620573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.448 [2024-11-20 16:13:36.668672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.707 
16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.707 [2024-11-20 16:13:36.716805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.707 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 [2024-11-20 16:13:36.764985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 [2024-11-20 16:13:36.813127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:05.708 "tick_rate": 2100000000, 00:12:05.708 "poll_groups": [ 00:12:05.708 { 00:12:05.708 "name": "nvmf_tgt_poll_group_000", 00:12:05.708 "admin_qpairs": 2, 00:12:05.708 "io_qpairs": 168, 00:12:05.708 "current_admin_qpairs": 0, 00:12:05.708 "current_io_qpairs": 0, 00:12:05.708 "pending_bdev_io": 0, 00:12:05.708 "completed_nvme_io": 266, 00:12:05.708 "transports": [ 00:12:05.708 { 00:12:05.708 "trtype": "TCP" 00:12:05.708 } 00:12:05.708 ] 00:12:05.708 }, 00:12:05.708 { 00:12:05.708 "name": "nvmf_tgt_poll_group_001", 00:12:05.708 "admin_qpairs": 2, 00:12:05.708 "io_qpairs": 168, 00:12:05.708 "current_admin_qpairs": 0, 00:12:05.708 "current_io_qpairs": 0, 00:12:05.708 "pending_bdev_io": 0, 00:12:05.708 "completed_nvme_io": 221, 00:12:05.708 "transports": [ 00:12:05.708 { 00:12:05.708 "trtype": "TCP" 00:12:05.708 } 00:12:05.708 ] 00:12:05.708 }, 00:12:05.708 { 00:12:05.708 "name": "nvmf_tgt_poll_group_002", 00:12:05.708 "admin_qpairs": 1, 00:12:05.708 "io_qpairs": 168, 00:12:05.708 "current_admin_qpairs": 0, 00:12:05.708 "current_io_qpairs": 0, 00:12:05.708 "pending_bdev_io": 0, 00:12:05.708 "completed_nvme_io": 366, 00:12:05.708 "transports": [ 00:12:05.708 { 00:12:05.708 "trtype": "TCP" 00:12:05.708 } 00:12:05.708 ] 00:12:05.708 }, 00:12:05.708 { 00:12:05.708 "name": "nvmf_tgt_poll_group_003", 00:12:05.708 "admin_qpairs": 2, 00:12:05.708 "io_qpairs": 168, 00:12:05.708 "current_admin_qpairs": 0, 00:12:05.708 "current_io_qpairs": 0, 00:12:05.708 "pending_bdev_io": 0, 00:12:05.708 "completed_nvme_io": 169, 00:12:05.708 "transports": [ 00:12:05.708 { 00:12:05.708 "trtype": "TCP" 00:12:05.708 } 00:12:05.708 ] 00:12:05.708 } 00:12:05.708 ] 00:12:05.708 }' 00:12:05.708 16:13:36 
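The loop traced above (target/rpc.sh@99-107) simply builds up and tears down the same subsystem on every pass before the stats are collected. A minimal standalone sketch of one iteration, assuming a running nvmf_tgt with a tcp transport and a Malloc1 bdev already created, and using the rpc.py path from this workspace, would be:

  # One iteration of the create/teardown loop exercised by target/rpc.sh above.
  # Assumes nvmf_tgt is running with a tcp transport and a bdev named Malloc1.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Each pass produces the same "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen repeatedly in the trace.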
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:05.708 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:05.967 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:05.967 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:05.967 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:05.967 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:05.967 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:05.967 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:05.967 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.967 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:05.967 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.967 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.967 rmmod nvme_tcp 00:12:05.967 rmmod nvme_fabrics 00:12:05.967 rmmod nvme_keyring 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1853096 ']' 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1853096 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1853096 ']' 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1853096 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1853096 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1853096' 00:12:05.967 killing process with pid 1853096 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1853096 00:12:05.967 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1853096 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.226 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.131 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.131 00:12:08.131 real 0m32.891s 00:12:08.131 user 1m38.961s 00:12:08.131 sys 0m6.586s 00:12:08.131 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.131 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.131 ************************************ 00:12:08.131 END TEST nvmf_rpc 00:12:08.131 ************************************ 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.391 ************************************ 00:12:08.391 START TEST nvmf_invalid 00:12:08.391 ************************************ 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:08.391 * Looking for test storage... 
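For reference, the jsum checks applied to the nvmf_get_stats output earlier (target/rpc.sh@112-113) are just a jq projection piped through an awk accumulator. A standalone equivalent, assuming the same workspace rpc.py path, is:

  # Sum io_qpairs across all poll groups reported by nvmf_get_stats.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'

With the stats captured above (four poll groups at 168 io_qpairs each) this prints 672; the same filter on .poll_groups[].admin_qpairs yields 7, which is exactly what the (( 7 > 0 )) and (( 672 > 0 )) assertions in the trace verify.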
00:12:08.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:08.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.391 --rc genhtml_branch_coverage=1 00:12:08.391 --rc genhtml_function_coverage=1 00:12:08.391 --rc genhtml_legend=1 00:12:08.391 --rc geninfo_all_blocks=1 00:12:08.391 --rc geninfo_unexecuted_blocks=1 00:12:08.391 00:12:08.391 ' 00:12:08.391 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:08.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.391 --rc genhtml_branch_coverage=1 00:12:08.391 --rc genhtml_function_coverage=1 00:12:08.391 --rc genhtml_legend=1 00:12:08.392 --rc geninfo_all_blocks=1 00:12:08.392 --rc geninfo_unexecuted_blocks=1 00:12:08.392 00:12:08.392 ' 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:08.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.392 --rc genhtml_branch_coverage=1 00:12:08.392 --rc genhtml_function_coverage=1 00:12:08.392 --rc genhtml_legend=1 00:12:08.392 --rc geninfo_all_blocks=1 00:12:08.392 --rc geninfo_unexecuted_blocks=1 00:12:08.392 00:12:08.392 ' 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:08.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.392 --rc genhtml_branch_coverage=1 00:12:08.392 --rc genhtml_function_coverage=1 00:12:08.392 --rc genhtml_legend=1 00:12:08.392 --rc geninfo_all_blocks=1 00:12:08.392 --rc geninfo_unexecuted_blocks=1 00:12:08.392 00:12:08.392 ' 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:08.392 16:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.392 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.652 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.221 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.221 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.221 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.221 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.221 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.221 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:15.222 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:15.222 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:15.222 Found net devices under 0000:86:00.0: cvl_0_0 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:15.222 Found net devices under 0000:86:00.1: cvl_0_1 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:12:15.222 00:12:15.222 --- 10.0.0.2 ping statistics --- 00:12:15.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.222 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:15.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:12:15.222 00:12:15.222 --- 10.0.0.1 ping statistics --- 00:12:15.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.222 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.222 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1860722 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1860722 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1860722 ']' 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.223 [2024-11-20 16:13:45.662824] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
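The namespace plumbing traced above (nvmf/common.sh@265-291) moves the target-side port into its own network namespace before the app starts, so initiator and target traffic cross a real link on this phy rig. Condensed, and assuming this rig's cvl_0_0/cvl_0_1 interface names, the setup is roughly:

  # Sketch of the target-namespace setup traced above (interface names are
  # specific to this rig; the harness also tags the iptables rule with an
  # SPDK_NVMF comment).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host

Both pings succeeding (as in the statistics above) is what allows nvmfappstart to launch nvmf_tgt inside cvl_0_0_ns_spdk and continue with the invalid-parameter tests.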
00:12:15.223 [2024-11-20 16:13:45.662873] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.223 [2024-11-20 16:13:45.741666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.223 [2024-11-20 16:13:45.784476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.223 [2024-11-20 16:13:45.784515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.223 [2024-11-20 16:13:45.784522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.223 [2024-11-20 16:13:45.784528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.223 [2024-11-20 16:13:45.784532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.223 [2024-11-20 16:13:45.786142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.223 [2024-11-20 16:13:45.786255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.223 [2024-11-20 16:13:45.786307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.223 [2024-11-20 16:13:45.786308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:15.223 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1551 00:12:15.223 [2024-11-20 16:13:46.101210] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:15.223 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:15.223 { 00:12:15.223 "nqn": "nqn.2016-06.io.spdk:cnode1551", 00:12:15.223 "tgt_name": "foobar", 00:12:15.223 "method": "nvmf_create_subsystem", 00:12:15.223 "req_id": 1 00:12:15.223 } 00:12:15.223 Got JSON-RPC error response 00:12:15.223 response: 00:12:15.223 { 00:12:15.223 "code": -32603, 00:12:15.223 "message": "Unable to find target foobar" 00:12:15.223 }' 00:12:15.223 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:15.223 { 00:12:15.223 "nqn": "nqn.2016-06.io.spdk:cnode1551", 00:12:15.223 "tgt_name": "foobar", 00:12:15.223 "method": "nvmf_create_subsystem", 00:12:15.223 "req_id": 1 00:12:15.223 } 00:12:15.223 Got JSON-RPC error response 00:12:15.223 
response: 00:12:15.223 { 00:12:15.223 "code": -32603, 00:12:15.223 "message": "Unable to find target foobar" 00:12:15.223 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:15.223 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:15.223 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5884 00:12:15.223 [2024-11-20 16:13:46.317955] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5884: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:15.223 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:15.223 { 00:12:15.223 "nqn": "nqn.2016-06.io.spdk:cnode5884", 00:12:15.223 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:15.223 "method": "nvmf_create_subsystem", 00:12:15.223 "req_id": 1 00:12:15.223 } 00:12:15.223 Got JSON-RPC error response 00:12:15.223 response: 00:12:15.223 { 00:12:15.223 "code": -32602, 00:12:15.223 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:15.223 }' 00:12:15.223 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:15.223 { 00:12:15.223 "nqn": "nqn.2016-06.io.spdk:cnode5884", 00:12:15.223 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:15.223 "method": "nvmf_create_subsystem", 00:12:15.223 "req_id": 1 00:12:15.223 } 00:12:15.223 Got JSON-RPC error response 00:12:15.223 response: 00:12:15.223 { 00:12:15.223 "code": -32602, 00:12:15.223 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:15.223 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:15.223 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:15.223 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31194 00:12:15.481 [2024-11-20 16:13:46.518609] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31194: invalid model number 'SPDK_Controller' 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:15.481 { 00:12:15.481 "nqn": "nqn.2016-06.io.spdk:cnode31194", 00:12:15.481 "model_number": "SPDK_Controller\u001f", 00:12:15.481 "method": "nvmf_create_subsystem", 00:12:15.481 "req_id": 1 00:12:15.481 } 00:12:15.481 Got JSON-RPC error response 00:12:15.481 response: 00:12:15.481 { 00:12:15.481 "code": -32602, 00:12:15.481 "message": "Invalid MN SPDK_Controller\u001f" 00:12:15.481 }' 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:15.481 { 00:12:15.481 "nqn": "nqn.2016-06.io.spdk:cnode31194", 00:12:15.481 "model_number": "SPDK_Controller\u001f", 00:12:15.481 "method": "nvmf_create_subsystem", 00:12:15.481 "req_id": 1 00:12:15.481 } 00:12:15.481 Got JSON-RPC error response 00:12:15.481 response: 00:12:15.481 { 00:12:15.481 "code": -32602, 00:12:15.481 "message": "Invalid MN SPDK_Controller\u001f" 00:12:15.481 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:15.481 16:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.481 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:15.482 16:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
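The trace above is invalid.sh's gen_random_s helper assembling a random test string one character at a time: it picks entries from a chars array holding ASCII codes 32 through 127, converts each code to hex with printf %x, renders it with echo -e '\xNN', and appends the result to string. A minimal stand-alone sketch of the same idea (illustrative only; the function name and the RANDOM-based selection below are not the SPDK helper itself):

    # Sketch: build a random string from ASCII codes 32..127, mirroring the
    # printf %x / echo -e / string+= pattern visible in the xtrace above.
    gen_random_string() {
        local length=$1 string='' code ch i
        for (( i = 0; i < length; i++ )); do
            code=$(( 32 + RANDOM % 96 ))                # 96 codes: ASCII 32..127
            printf -v ch "\\x$(printf '%x' "$code")"    # render the code as a character
            string+=$ch
        done
        printf '%s\n' "$string"
    }

    gen_random_string 21    # 21 random printable characters, like the serial being built above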
00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ` == \- ]] 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '`duoducY5%,*%zw4Q?U~/' 00:12:15.482 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '`duoducY5%,*%zw4Q?U~/' nqn.2016-06.io.spdk:cnode29592 00:12:15.740 [2024-11-20 16:13:46.851801] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29592: invalid serial number '`duoducY5%,*%zw4Q?U~/' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:15.740 { 00:12:15.740 "nqn": "nqn.2016-06.io.spdk:cnode29592", 00:12:15.740 "serial_number": "`duoducY5%,*%zw4Q?U~/", 00:12:15.740 "method": "nvmf_create_subsystem", 00:12:15.740 "req_id": 1 00:12:15.740 } 00:12:15.740 Got JSON-RPC error response 00:12:15.740 response: 00:12:15.740 { 00:12:15.740 "code": -32602, 00:12:15.740 "message": "Invalid SN `duoducY5%,*%zw4Q?U~/" 00:12:15.740 }' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:15.740 { 00:12:15.740 "nqn": "nqn.2016-06.io.spdk:cnode29592", 00:12:15.740 "serial_number": "`duoducY5%,*%zw4Q?U~/", 00:12:15.740 "method": "nvmf_create_subsystem", 00:12:15.740 "req_id": 1 00:12:15.740 } 00:12:15.740 Got JSON-RPC error response 00:12:15.740 response: 00:12:15.740 { 00:12:15.740 "code": -32602, 00:12:15.740 "message": "Invalid SN `duoducY5%,*%zw4Q?U~/" 00:12:15.740 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 
41 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 
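Each generated value is then handed to the target over JSON-RPC and the test asserts the reject, as in the 21-character serial case above. A condensed sketch of that check, reusing the rpc.py path, subsystem NQN, serial and error text visible in this run (the control flow here is illustrative, and how the real invalid.sh captures the error output may differ):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode29592
    serial='`duoducY5%,*%zw4Q?U~/'     # 21 characters, one more than the 20-byte SN field

    # The RPC must fail, and its -32602 error must carry "Invalid SN".
    if out=$("$rpc" nvmf_create_subsystem -s "$serial" "$nqn" 2>&1); then
        echo "expected nvmf_create_subsystem to reject the serial number" >&2
        exit 1
    fi
    [[ $out == *"Invalid SN"* ]]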
00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x56' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.740 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:15.998 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 88 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:15.998 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
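The string lengths used here are boundary cases rather than arbitrary choices: the NVMe Identify Controller data structure reserves 20 bytes for the serial number (SN) and 40 bytes for the model number (MN), so the test asks for 21 and 41 characters, the smallest over-length inputs. A quick illustration using the gen_random_string sketch shown earlier (a hypothetical helper, not part of invalid.sh):

    sn=$(gen_random_string 21)    # one byte past the SN field; expect "Invalid SN" from -s
    mn=$(gen_random_string 41)    # one byte past the MN field; expect "Invalid MN" from -d
    echo "${#sn} ${#mn}"          # prints: 21 41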
00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x65' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:12:15.999 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'O|4 /dev/null' 00:12:18.317 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.223 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:20.223 00:12:20.223 real 0m12.030s 00:12:20.223 user 0m18.550s 00:12:20.223 sys 0m5.437s 00:12:20.223 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.223 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:20.223 ************************************ 00:12:20.223 END TEST nvmf_invalid 00:12:20.223 ************************************ 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.483 ************************************ 00:12:20.483 START TEST nvmf_connect_stress 00:12:20.483 ************************************ 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:20.483 * Looking for test storage... 00:12:20.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:20.483 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:20.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.484 --rc genhtml_branch_coverage=1 00:12:20.484 --rc genhtml_function_coverage=1 00:12:20.484 --rc genhtml_legend=1 00:12:20.484 --rc geninfo_all_blocks=1 00:12:20.484 --rc geninfo_unexecuted_blocks=1 00:12:20.484 00:12:20.484 ' 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:20.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.484 --rc genhtml_branch_coverage=1 00:12:20.484 --rc genhtml_function_coverage=1 00:12:20.484 --rc genhtml_legend=1 00:12:20.484 --rc geninfo_all_blocks=1 00:12:20.484 --rc geninfo_unexecuted_blocks=1 00:12:20.484 00:12:20.484 ' 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:20.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.484 --rc genhtml_branch_coverage=1 00:12:20.484 --rc genhtml_function_coverage=1 00:12:20.484 --rc genhtml_legend=1 00:12:20.484 --rc geninfo_all_blocks=1 00:12:20.484 --rc geninfo_unexecuted_blocks=1 00:12:20.484 00:12:20.484 ' 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:20.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.484 --rc genhtml_branch_coverage=1 00:12:20.484 --rc genhtml_function_coverage=1 00:12:20.484 --rc genhtml_legend=1 00:12:20.484 --rc geninfo_all_blocks=1 00:12:20.484 --rc geninfo_unexecuted_blocks=1 00:12:20.484 00:12:20.484 ' 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.484 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:20.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.744 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:27.316 16:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.316 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:27.316 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:27.317 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:27.317 Found net devices under 0000:86:00.0: cvl_0_0 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:27.317 Found net devices under 0000:86:00.1: cvl_0_1 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:27.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:12:27.317 00:12:27.317 --- 10.0.0.2 ping statistics --- 00:12:27.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.317 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:12:27.317 00:12:27.317 --- 10.0.0.1 ping statistics --- 00:12:27.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.317 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1865110 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1865110 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1865110 ']' 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:27.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.317 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.317 [2024-11-20 16:13:57.800118] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:12:27.317 [2024-11-20 16:13:57.800161] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.317 [2024-11-20 16:13:57.880237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:27.317 [2024-11-20 16:13:57.919097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.317 [2024-11-20 16:13:57.919134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.317 [2024-11-20 16:13:57.919141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.317 [2024-11-20 16:13:57.919147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.317 [2024-11-20 16:13:57.919152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.317 [2024-11-20 16:13:57.920516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.317 [2024-11-20 16:13:57.920604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.318 [2024-11-20 16:13:57.920604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.576 [2024-11-20 16:13:58.671291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
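The namespace plumbing that nvmf_tcp_init traces above reduces to a short sequence of ip/iptables commands. The sketch below condenses that sequence; the interface names (cvl_0_0, cvl_0_1), the addresses, and the SPDK_NVMF comment string are taken from this run, while packaging the steps as a standalone script is illustrative rather than the harness's literal code.

    #!/usr/bin/env bash
    # Condensed sketch of nvmf_tcp_init as traced above: the target-side NIC is
    # isolated in a network namespace, the initiator-side NIC stays in the root
    # namespace, and TCP port 4420 is opened for the NVMe/TCP listener.
    set -euo pipefail

    TGT_IF=cvl_0_0        # target interface in this run
    INI_IF=cvl_0_1        # initiator interface in this run
    NS=cvl_0_0_ns_spdk    # namespace name used by the harness

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"                      # NVMF_INITIATOR_IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # NVMF_FIRST_TARGET_IP

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow the listener port through the firewall and verify reachability both ways.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

With the interfaces in place, the target application itself is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt, as traced above), so the initiator-side tools only ever see cvl_0_1 in the root namespace.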
00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.576 [2024-11-20 16:13:58.691501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.576 NULL1 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1865201 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.576 16:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.576 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.142 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.142 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:28.142 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.142 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.142 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.401 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.401 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:28.401 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.401 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.401 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.657 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.657 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:28.657 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.657 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.657 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.914 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.914 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:28.914 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.914 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.914 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.478 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.478 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:29.478 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.478 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.478 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.735 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.735 16:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:29.735 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.735 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.735 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.992 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.992 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:29.992 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.992 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.992 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.250 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.250 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:30.250 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.250 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.250 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.507 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.507 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:30.507 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.507 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.507 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.070 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.070 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:31.070 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.070 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.070 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.327 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.327 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:31.327 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.327 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.327 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.584 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.584 16:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:31.584 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.584 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.584 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.840 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.841 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:31.841 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.841 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.841 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.404 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.404 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:32.404 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.404 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.404 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.661 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.661 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:32.661 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.661 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.661 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.918 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.918 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:32.918 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.918 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.918 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.176 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.176 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:33.176 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.176 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.176 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.434 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.434 16:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:33.434 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.434 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.434 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.999 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.999 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:33.999 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.999 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.999 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.256 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.256 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:34.256 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.256 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.256 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.515 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.515 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:34.515 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.515 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.515 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.772 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.772 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:34.772 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.772 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.772 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.336 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.336 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:35.336 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.336 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.336 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.594 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.594 16:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:35.594 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.594 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.594 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.851 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.851 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:35.851 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.851 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.851 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.109 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.109 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:36.109 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.109 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.109 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.368 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.368 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:36.368 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.368 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.368 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.932 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.932 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:36.932 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.932 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.932 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.189 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.190 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:37.190 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.190 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.190 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.446 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.446 16:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:37.446 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.446 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.446 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.703 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1865201 00:12:37.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1865201) - No such process 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1865201 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.703 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.703 rmmod nvme_tcp 00:12:37.703 rmmod nvme_fabrics 00:12:37.963 rmmod nvme_keyring 00:12:37.963 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.963 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:37.963 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:37.963 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1865110 ']' 00:12:37.963 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1865110 00:12:37.963 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1865110 ']' 00:12:37.963 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1865110 00:12:37.963 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:37.963 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.963 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1865110 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
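The stress phase above follows a simple poll-and-replay pattern: connect_stress runs in the background while batches of RPCs from rpc.txt are replayed against the live target, and kill -0 on the stressor's PID decides when to stop (the "No such process" message is the expected exit of that loop). A minimal sketch of the pattern, assuming rpc_cmd is the harness helper that forwards RPCs to /var/tmp/spdk.sock and eliding the actual RPC payload written into rpc.txt:

    # Stress-loop sketch; the connect_stress flags and subsystem NQN are copied
    # from this run, and the rpc_cmd redirection is an assumption about the helper.
    rpcs=rpc.txt

    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    # Keep the admin path busy while the stressor connects and disconnects:
    # replay the queued RPC batch until kill -0 reports the process is gone.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$rpcs"
    done

    wait "$PERF_PID" || true
    rm -f "$rpcs"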
00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1865110' 00:12:37.963 killing process with pid 1865110 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1865110 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1865110 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.963 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:40.508 00:12:40.508 real 0m19.726s 00:12:40.508 user 0m41.321s 00:12:40.508 sys 0m8.625s 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.508 ************************************ 00:12:40.508 END TEST nvmf_connect_stress 00:12:40.508 ************************************ 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:40.508 ************************************ 00:12:40.508 START TEST nvmf_fused_ordering 00:12:40.508 ************************************ 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:40.508 * Looking for test storage... 
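Teardown in nvmftestfini, as traced above, unloads the host NVMe modules, stops the target, and then undoes the firewall and namespace changes. The iptables step relies on the SPDK_NVMF comment added at setup time: rather than deleting rules individually, the ruleset is dumped, the tagged rules are filtered out, and the result is restored. A condensed sketch, with the process variable and namespace name taken from this run and the namespace deletion assumed as the effect of _remove_spdk_ns:

    # Teardown sketch (values from this run; _remove_spdk_ns behaviour assumed).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    kill "$nvmfpid" && wait "$nvmfpid"      # stop the nvmf_tgt reactor process

    # Drop only the SPDK-tagged rules, leaving the rest of the host policy intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk         # assumed implementation of _remove_spdk_ns
    ip -4 addr flush cvl_0_1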
00:12:40.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:40.508 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:40.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.509 --rc genhtml_branch_coverage=1 00:12:40.509 --rc genhtml_function_coverage=1 00:12:40.509 --rc genhtml_legend=1 00:12:40.509 --rc geninfo_all_blocks=1 00:12:40.509 --rc geninfo_unexecuted_blocks=1 00:12:40.509 00:12:40.509 ' 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:40.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.509 --rc genhtml_branch_coverage=1 00:12:40.509 --rc genhtml_function_coverage=1 00:12:40.509 --rc genhtml_legend=1 00:12:40.509 --rc geninfo_all_blocks=1 00:12:40.509 --rc geninfo_unexecuted_blocks=1 00:12:40.509 00:12:40.509 ' 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:40.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.509 --rc genhtml_branch_coverage=1 00:12:40.509 --rc genhtml_function_coverage=1 00:12:40.509 --rc genhtml_legend=1 00:12:40.509 --rc geninfo_all_blocks=1 00:12:40.509 --rc geninfo_unexecuted_blocks=1 00:12:40.509 00:12:40.509 ' 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:40.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.509 --rc genhtml_branch_coverage=1 00:12:40.509 --rc genhtml_function_coverage=1 00:12:40.509 --rc genhtml_legend=1 00:12:40.509 --rc geninfo_all_blocks=1 00:12:40.509 --rc geninfo_unexecuted_blocks=1 00:12:40.509 00:12:40.509 ' 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:40.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:40.509 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.144 16:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:47.144 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:47.145 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:47.145 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:47.145 Found net devices under 0000:86:00.0: cvl_0_0 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:47.145 Found net devices under 0000:86:00.1: cvl_0_1 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:47.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:12:47.145 00:12:47.145 --- 10.0.0.2 ping statistics --- 00:12:47.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.145 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:12:47.145 00:12:47.145 --- 10.0.0.1 ping statistics --- 00:12:47.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.145 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1870515 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1870515 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1870515 ']' 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:47.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.145 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.145 [2024-11-20 16:14:17.595885] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:12:47.145 [2024-11-20 16:14:17.595927] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.145 [2024-11-20 16:14:17.675249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.145 [2024-11-20 16:14:17.715425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.146 [2024-11-20 16:14:17.715463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.146 [2024-11-20 16:14:17.715470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.146 [2024-11-20 16:14:17.715476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.146 [2024-11-20 16:14:17.715481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.146 [2024-11-20 16:14:17.716013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.146 [2024-11-20 16:14:17.858429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.146 [2024-11-20 16:14:17.878619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.146 NULL1 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.146 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:47.146 [2024-11-20 16:14:17.936163] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
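A note on the "[: : integer expression expected" warning that /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh line 33 printed earlier in this test: the traced command there is '[' '' -eq 1 ']', a numeric -eq test whose left-hand operand expanded to an empty string. The test simply evaluates false and the run continues, but the warning is avoidable by giving the variable a numeric default before comparing. A minimal sketch of that pattern in bash, where SPDK_EXAMPLE_FLAG is a placeholder name and not the actual variable common.sh checks on that line:

    # SPDK_EXAMPLE_FLAG is hypothetical; substitute whichever flag is really being tested.
    # With the variable unset or empty, a bare test warns:
    #   [ "$SPDK_EXAMPLE_FLAG" -eq 1 ]   ->   [: : integer expression expected
    # Expanding with a default keeps both operands numeric:
    if [ "${SPDK_EXAMPLE_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi

    # The arithmetic form tolerates the empty case the same way:
    if (( ${SPDK_EXAMPLE_FLAG:-0} == 1 )); then
        echo "flag enabled"
    fi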
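The nvmftestinit/nvmfappstart sequence traced above reduces to a short recipe: one port of the E810 pair (cvl_0_0) is moved into a private network namespace for the target, both ends are addressed on 10.0.0.0/24, TCP port 4420 is opened, nvmf_tgt is started inside the namespace, and a single subsystem backed by a null bdev is created for the fused_ordering tool to drive. (Fused commands are NVMe command pairs that must be executed back to back, compare-and-write being the canonical example; each fused_ordering(N) line below appears to mark one iteration of the tool.) The following is a condensed sketch of that flow, run from the spdk repository root, reusing the interface names, addresses and RPC arguments from the trace but with simplified paths and no error handling, so it illustrates the shape of the setup rather than reproducing the exact helpers in nvmf/common.sh:

    # Network topology: target NIC in its own namespace, initiator NIC left on the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host

    # Start the target on core mask 0x2 inside the namespace, then build the subsystem
    # over the default RPC socket (the harness waits for /var/tmp/spdk.sock first).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Exercise fused ordering from the host side against that listener.
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

    # Teardown, mirrored further down in the trace: unload initiator modules, stop the
    # target, drop the SPDK_NVMF iptables rules and remove the namespace.
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics
    kill "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                     # roughly what remove_spdk_ns does here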
00:12:47.146 [2024-11-20 16:14:17.936194] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870540 ] 00:12:47.146 Attached to nqn.2016-06.io.spdk:cnode1 00:12:47.146 Namespace ID: 1 size: 1GB 00:12:47.146 fused_ordering(0) 00:12:47.146 fused_ordering(1) 00:12:47.146 fused_ordering(2) 00:12:47.146 fused_ordering(3) 00:12:47.146 fused_ordering(4) 00:12:47.146 fused_ordering(5) 00:12:47.146 fused_ordering(6) 00:12:47.146 fused_ordering(7) 00:12:47.146 fused_ordering(8) 00:12:47.146 fused_ordering(9) 00:12:47.146 fused_ordering(10) 00:12:47.146 fused_ordering(11) 00:12:47.146 fused_ordering(12) 00:12:47.146 fused_ordering(13) 00:12:47.146 fused_ordering(14) 00:12:47.146 fused_ordering(15) 00:12:47.146 fused_ordering(16) 00:12:47.146 fused_ordering(17) 00:12:47.146 fused_ordering(18) 00:12:47.146 fused_ordering(19) 00:12:47.146 fused_ordering(20) 00:12:47.146 fused_ordering(21) 00:12:47.146 fused_ordering(22) 00:12:47.146 fused_ordering(23) 00:12:47.146 fused_ordering(24) 00:12:47.146 fused_ordering(25) 00:12:47.146 fused_ordering(26) 00:12:47.146 fused_ordering(27) 00:12:47.146 fused_ordering(28) 00:12:47.146 fused_ordering(29) 00:12:47.146 fused_ordering(30) 00:12:47.146 fused_ordering(31) 00:12:47.146 fused_ordering(32) 00:12:47.146 fused_ordering(33) 00:12:47.146 fused_ordering(34) 00:12:47.146 fused_ordering(35) 00:12:47.146 fused_ordering(36) 00:12:47.146 fused_ordering(37) 00:12:47.146 fused_ordering(38) 00:12:47.146 fused_ordering(39) 00:12:47.146 fused_ordering(40) 00:12:47.146 fused_ordering(41) 00:12:47.146 fused_ordering(42) 00:12:47.146 fused_ordering(43) 00:12:47.146 fused_ordering(44) 00:12:47.146 fused_ordering(45) 00:12:47.146 fused_ordering(46) 00:12:47.146 fused_ordering(47) 00:12:47.146 fused_ordering(48) 00:12:47.146 fused_ordering(49) 00:12:47.146 fused_ordering(50) 00:12:47.146 fused_ordering(51) 00:12:47.146 fused_ordering(52) 00:12:47.146 fused_ordering(53) 00:12:47.146 fused_ordering(54) 00:12:47.146 fused_ordering(55) 00:12:47.146 fused_ordering(56) 00:12:47.146 fused_ordering(57) 00:12:47.146 fused_ordering(58) 00:12:47.146 fused_ordering(59) 00:12:47.146 fused_ordering(60) 00:12:47.146 fused_ordering(61) 00:12:47.146 fused_ordering(62) 00:12:47.146 fused_ordering(63) 00:12:47.146 fused_ordering(64) 00:12:47.146 fused_ordering(65) 00:12:47.146 fused_ordering(66) 00:12:47.146 fused_ordering(67) 00:12:47.146 fused_ordering(68) 00:12:47.146 fused_ordering(69) 00:12:47.146 fused_ordering(70) 00:12:47.146 fused_ordering(71) 00:12:47.146 fused_ordering(72) 00:12:47.146 fused_ordering(73) 00:12:47.146 fused_ordering(74) 00:12:47.146 fused_ordering(75) 00:12:47.146 fused_ordering(76) 00:12:47.146 fused_ordering(77) 00:12:47.146 fused_ordering(78) 00:12:47.146 fused_ordering(79) 00:12:47.146 fused_ordering(80) 00:12:47.146 fused_ordering(81) 00:12:47.146 fused_ordering(82) 00:12:47.146 fused_ordering(83) 00:12:47.146 fused_ordering(84) 00:12:47.146 fused_ordering(85) 00:12:47.146 fused_ordering(86) 00:12:47.146 fused_ordering(87) 00:12:47.146 fused_ordering(88) 00:12:47.146 fused_ordering(89) 00:12:47.146 fused_ordering(90) 00:12:47.146 fused_ordering(91) 00:12:47.146 fused_ordering(92) 00:12:47.146 fused_ordering(93) 00:12:47.146 fused_ordering(94) 00:12:47.146 fused_ordering(95) 00:12:47.146 fused_ordering(96) 00:12:47.146 fused_ordering(97) 00:12:47.146 fused_ordering(98) 
00:12:47.146 fused_ordering(99) 00:12:47.146 fused_ordering(100) 00:12:47.146 fused_ordering(101) 00:12:47.146 fused_ordering(102) 00:12:47.146 fused_ordering(103) 00:12:47.146 fused_ordering(104) 00:12:47.146 fused_ordering(105) 00:12:47.146 fused_ordering(106) 00:12:47.146 fused_ordering(107) 00:12:47.146 fused_ordering(108) 00:12:47.146 fused_ordering(109) 00:12:47.146 fused_ordering(110) 00:12:47.146 fused_ordering(111) 00:12:47.146 fused_ordering(112) 00:12:47.146 fused_ordering(113) 00:12:47.146 fused_ordering(114) 00:12:47.146 fused_ordering(115) 00:12:47.146 fused_ordering(116) 00:12:47.146 fused_ordering(117) 00:12:47.146 fused_ordering(118) 00:12:47.146 fused_ordering(119) 00:12:47.146 fused_ordering(120) 00:12:47.146 fused_ordering(121) 00:12:47.146 fused_ordering(122) 00:12:47.146 fused_ordering(123) 00:12:47.146 fused_ordering(124) 00:12:47.146 fused_ordering(125) 00:12:47.146 fused_ordering(126) 00:12:47.146 fused_ordering(127) 00:12:47.146 fused_ordering(128) 00:12:47.146 fused_ordering(129) 00:12:47.146 fused_ordering(130) 00:12:47.146 fused_ordering(131) 00:12:47.146 fused_ordering(132) 00:12:47.146 fused_ordering(133) 00:12:47.146 fused_ordering(134) 00:12:47.146 fused_ordering(135) 00:12:47.146 fused_ordering(136) 00:12:47.146 fused_ordering(137) 00:12:47.146 fused_ordering(138) 00:12:47.146 fused_ordering(139) 00:12:47.146 fused_ordering(140) 00:12:47.146 fused_ordering(141) 00:12:47.146 fused_ordering(142) 00:12:47.146 fused_ordering(143) 00:12:47.146 fused_ordering(144) 00:12:47.146 fused_ordering(145) 00:12:47.146 fused_ordering(146) 00:12:47.146 fused_ordering(147) 00:12:47.146 fused_ordering(148) 00:12:47.146 fused_ordering(149) 00:12:47.146 fused_ordering(150) 00:12:47.146 fused_ordering(151) 00:12:47.147 fused_ordering(152) 00:12:47.147 fused_ordering(153) 00:12:47.147 fused_ordering(154) 00:12:47.147 fused_ordering(155) 00:12:47.147 fused_ordering(156) 00:12:47.147 fused_ordering(157) 00:12:47.147 fused_ordering(158) 00:12:47.147 fused_ordering(159) 00:12:47.147 fused_ordering(160) 00:12:47.147 fused_ordering(161) 00:12:47.147 fused_ordering(162) 00:12:47.147 fused_ordering(163) 00:12:47.147 fused_ordering(164) 00:12:47.147 fused_ordering(165) 00:12:47.147 fused_ordering(166) 00:12:47.147 fused_ordering(167) 00:12:47.147 fused_ordering(168) 00:12:47.147 fused_ordering(169) 00:12:47.147 fused_ordering(170) 00:12:47.147 fused_ordering(171) 00:12:47.147 fused_ordering(172) 00:12:47.147 fused_ordering(173) 00:12:47.147 fused_ordering(174) 00:12:47.147 fused_ordering(175) 00:12:47.147 fused_ordering(176) 00:12:47.147 fused_ordering(177) 00:12:47.147 fused_ordering(178) 00:12:47.147 fused_ordering(179) 00:12:47.147 fused_ordering(180) 00:12:47.147 fused_ordering(181) 00:12:47.147 fused_ordering(182) 00:12:47.147 fused_ordering(183) 00:12:47.147 fused_ordering(184) 00:12:47.147 fused_ordering(185) 00:12:47.147 fused_ordering(186) 00:12:47.147 fused_ordering(187) 00:12:47.147 fused_ordering(188) 00:12:47.147 fused_ordering(189) 00:12:47.147 fused_ordering(190) 00:12:47.147 fused_ordering(191) 00:12:47.147 fused_ordering(192) 00:12:47.147 fused_ordering(193) 00:12:47.147 fused_ordering(194) 00:12:47.147 fused_ordering(195) 00:12:47.147 fused_ordering(196) 00:12:47.147 fused_ordering(197) 00:12:47.147 fused_ordering(198) 00:12:47.147 fused_ordering(199) 00:12:47.147 fused_ordering(200) 00:12:47.147 fused_ordering(201) 00:12:47.147 fused_ordering(202) 00:12:47.147 fused_ordering(203) 00:12:47.147 fused_ordering(204) 00:12:47.147 fused_ordering(205) 00:12:47.450 
fused_ordering(206) 00:12:47.450 fused_ordering(207) 00:12:47.450 fused_ordering(208) 00:12:47.450 fused_ordering(209) 00:12:47.450 fused_ordering(210) 00:12:47.450 fused_ordering(211) 00:12:47.450 fused_ordering(212) 00:12:47.450 fused_ordering(213) 00:12:47.450 fused_ordering(214) 00:12:47.450 fused_ordering(215) 00:12:47.450 fused_ordering(216) 00:12:47.450 fused_ordering(217) 00:12:47.450 fused_ordering(218) 00:12:47.450 fused_ordering(219) 00:12:47.450 fused_ordering(220) 00:12:47.450 fused_ordering(221) 00:12:47.450 fused_ordering(222) 00:12:47.450 fused_ordering(223) 00:12:47.450 fused_ordering(224) 00:12:47.450 fused_ordering(225) 00:12:47.450 fused_ordering(226) 00:12:47.450 fused_ordering(227) 00:12:47.450 fused_ordering(228) 00:12:47.450 fused_ordering(229) 00:12:47.450 fused_ordering(230) 00:12:47.450 fused_ordering(231) 00:12:47.450 fused_ordering(232) 00:12:47.450 fused_ordering(233) 00:12:47.450 fused_ordering(234) 00:12:47.450 fused_ordering(235) 00:12:47.450 fused_ordering(236) 00:12:47.450 fused_ordering(237) 00:12:47.450 fused_ordering(238) 00:12:47.450 fused_ordering(239) 00:12:47.450 fused_ordering(240) 00:12:47.450 fused_ordering(241) 00:12:47.450 fused_ordering(242) 00:12:47.450 fused_ordering(243) 00:12:47.450 fused_ordering(244) 00:12:47.450 fused_ordering(245) 00:12:47.450 fused_ordering(246) 00:12:47.451 fused_ordering(247) 00:12:47.451 fused_ordering(248) 00:12:47.451 fused_ordering(249) 00:12:47.451 fused_ordering(250) 00:12:47.451 fused_ordering(251) 00:12:47.451 fused_ordering(252) 00:12:47.451 fused_ordering(253) 00:12:47.451 fused_ordering(254) 00:12:47.451 fused_ordering(255) 00:12:47.451 fused_ordering(256) 00:12:47.451 fused_ordering(257) 00:12:47.451 fused_ordering(258) 00:12:47.451 fused_ordering(259) 00:12:47.451 fused_ordering(260) 00:12:47.451 fused_ordering(261) 00:12:47.451 fused_ordering(262) 00:12:47.451 fused_ordering(263) 00:12:47.451 fused_ordering(264) 00:12:47.451 fused_ordering(265) 00:12:47.451 fused_ordering(266) 00:12:47.451 fused_ordering(267) 00:12:47.451 fused_ordering(268) 00:12:47.451 fused_ordering(269) 00:12:47.451 fused_ordering(270) 00:12:47.451 fused_ordering(271) 00:12:47.451 fused_ordering(272) 00:12:47.451 fused_ordering(273) 00:12:47.451 fused_ordering(274) 00:12:47.451 fused_ordering(275) 00:12:47.451 fused_ordering(276) 00:12:47.451 fused_ordering(277) 00:12:47.451 fused_ordering(278) 00:12:47.451 fused_ordering(279) 00:12:47.451 fused_ordering(280) 00:12:47.451 fused_ordering(281) 00:12:47.451 fused_ordering(282) 00:12:47.451 fused_ordering(283) 00:12:47.451 fused_ordering(284) 00:12:47.451 fused_ordering(285) 00:12:47.451 fused_ordering(286) 00:12:47.451 fused_ordering(287) 00:12:47.451 fused_ordering(288) 00:12:47.451 fused_ordering(289) 00:12:47.451 fused_ordering(290) 00:12:47.451 fused_ordering(291) 00:12:47.451 fused_ordering(292) 00:12:47.451 fused_ordering(293) 00:12:47.451 fused_ordering(294) 00:12:47.451 fused_ordering(295) 00:12:47.451 fused_ordering(296) 00:12:47.451 fused_ordering(297) 00:12:47.451 fused_ordering(298) 00:12:47.451 fused_ordering(299) 00:12:47.451 fused_ordering(300) 00:12:47.451 fused_ordering(301) 00:12:47.451 fused_ordering(302) 00:12:47.451 fused_ordering(303) 00:12:47.451 fused_ordering(304) 00:12:47.451 fused_ordering(305) 00:12:47.451 fused_ordering(306) 00:12:47.451 fused_ordering(307) 00:12:47.451 fused_ordering(308) 00:12:47.451 fused_ordering(309) 00:12:47.451 fused_ordering(310) 00:12:47.451 fused_ordering(311) 00:12:47.451 fused_ordering(312) 00:12:47.451 fused_ordering(313) 
00:12:47.451 fused_ordering(314) 00:12:47.451 fused_ordering(315) 00:12:47.451 fused_ordering(316) 00:12:47.451 fused_ordering(317) 00:12:47.451 fused_ordering(318) 00:12:47.451 fused_ordering(319) 00:12:47.451 fused_ordering(320) 00:12:47.451 fused_ordering(321) 00:12:47.451 fused_ordering(322) 00:12:47.451 fused_ordering(323) 00:12:47.451 fused_ordering(324) 00:12:47.451 fused_ordering(325) 00:12:47.451 fused_ordering(326) 00:12:47.451 fused_ordering(327) 00:12:47.451 fused_ordering(328) 00:12:47.451 fused_ordering(329) 00:12:47.451 fused_ordering(330) 00:12:47.451 fused_ordering(331) 00:12:47.451 fused_ordering(332) 00:12:47.451 fused_ordering(333) 00:12:47.451 fused_ordering(334) 00:12:47.451 fused_ordering(335) 00:12:47.451 fused_ordering(336) 00:12:47.451 fused_ordering(337) 00:12:47.451 fused_ordering(338) 00:12:47.451 fused_ordering(339) 00:12:47.451 fused_ordering(340) 00:12:47.451 fused_ordering(341) 00:12:47.451 fused_ordering(342) 00:12:47.451 fused_ordering(343) 00:12:47.451 fused_ordering(344) 00:12:47.451 fused_ordering(345) 00:12:47.451 fused_ordering(346) 00:12:47.451 fused_ordering(347) 00:12:47.451 fused_ordering(348) 00:12:47.451 fused_ordering(349) 00:12:47.451 fused_ordering(350) 00:12:47.451 fused_ordering(351) 00:12:47.451 fused_ordering(352) 00:12:47.451 fused_ordering(353) 00:12:47.451 fused_ordering(354) 00:12:47.451 fused_ordering(355) 00:12:47.451 fused_ordering(356) 00:12:47.451 fused_ordering(357) 00:12:47.451 fused_ordering(358) 00:12:47.451 fused_ordering(359) 00:12:47.451 fused_ordering(360) 00:12:47.451 fused_ordering(361) 00:12:47.451 fused_ordering(362) 00:12:47.451 fused_ordering(363) 00:12:47.451 fused_ordering(364) 00:12:47.451 fused_ordering(365) 00:12:47.451 fused_ordering(366) 00:12:47.451 fused_ordering(367) 00:12:47.451 fused_ordering(368) 00:12:47.451 fused_ordering(369) 00:12:47.451 fused_ordering(370) 00:12:47.451 fused_ordering(371) 00:12:47.451 fused_ordering(372) 00:12:47.451 fused_ordering(373) 00:12:47.451 fused_ordering(374) 00:12:47.451 fused_ordering(375) 00:12:47.451 fused_ordering(376) 00:12:47.451 fused_ordering(377) 00:12:47.451 fused_ordering(378) 00:12:47.451 fused_ordering(379) 00:12:47.451 fused_ordering(380) 00:12:47.451 fused_ordering(381) 00:12:47.451 fused_ordering(382) 00:12:47.451 fused_ordering(383) 00:12:47.451 fused_ordering(384) 00:12:47.451 fused_ordering(385) 00:12:47.451 fused_ordering(386) 00:12:47.451 fused_ordering(387) 00:12:47.451 fused_ordering(388) 00:12:47.451 fused_ordering(389) 00:12:47.451 fused_ordering(390) 00:12:47.451 fused_ordering(391) 00:12:47.451 fused_ordering(392) 00:12:47.451 fused_ordering(393) 00:12:47.451 fused_ordering(394) 00:12:47.451 fused_ordering(395) 00:12:47.451 fused_ordering(396) 00:12:47.451 fused_ordering(397) 00:12:47.451 fused_ordering(398) 00:12:47.451 fused_ordering(399) 00:12:47.451 fused_ordering(400) 00:12:47.451 fused_ordering(401) 00:12:47.451 fused_ordering(402) 00:12:47.451 fused_ordering(403) 00:12:47.451 fused_ordering(404) 00:12:47.451 fused_ordering(405) 00:12:47.451 fused_ordering(406) 00:12:47.451 fused_ordering(407) 00:12:47.451 fused_ordering(408) 00:12:47.451 fused_ordering(409) 00:12:47.451 fused_ordering(410) 00:12:47.756 fused_ordering(411) 00:12:47.756 fused_ordering(412) 00:12:47.756 fused_ordering(413) 00:12:47.756 fused_ordering(414) 00:12:47.756 fused_ordering(415) 00:12:47.756 fused_ordering(416) 00:12:47.756 fused_ordering(417) 00:12:47.756 fused_ordering(418) 00:12:47.756 fused_ordering(419) 00:12:47.756 fused_ordering(420) 00:12:47.756 
fused_ordering(421) 00:12:47.756 fused_ordering(422) 00:12:47.756 fused_ordering(423) 00:12:47.756 fused_ordering(424) 00:12:47.756 fused_ordering(425) 00:12:47.756 fused_ordering(426) 00:12:47.756 fused_ordering(427) 00:12:47.756 fused_ordering(428) 00:12:47.756 fused_ordering(429) 00:12:47.756 fused_ordering(430) 00:12:47.756 fused_ordering(431) 00:12:47.756 fused_ordering(432) 00:12:47.756 fused_ordering(433) 00:12:47.756 fused_ordering(434) 00:12:47.756 fused_ordering(435) 00:12:47.756 fused_ordering(436) 00:12:47.756 fused_ordering(437) 00:12:47.756 fused_ordering(438) 00:12:47.756 fused_ordering(439) 00:12:47.756 fused_ordering(440) 00:12:47.756 fused_ordering(441) 00:12:47.756 fused_ordering(442) 00:12:47.756 fused_ordering(443) 00:12:47.756 fused_ordering(444) 00:12:47.756 fused_ordering(445) 00:12:47.756 fused_ordering(446) 00:12:47.756 fused_ordering(447) 00:12:47.756 fused_ordering(448) 00:12:47.756 fused_ordering(449) 00:12:47.756 fused_ordering(450) 00:12:47.756 fused_ordering(451) 00:12:47.756 fused_ordering(452) 00:12:47.756 fused_ordering(453) 00:12:47.756 fused_ordering(454) 00:12:47.756 fused_ordering(455) 00:12:47.756 fused_ordering(456) 00:12:47.756 fused_ordering(457) 00:12:47.756 fused_ordering(458) 00:12:47.756 fused_ordering(459) 00:12:47.756 fused_ordering(460) 00:12:47.756 fused_ordering(461) 00:12:47.756 fused_ordering(462) 00:12:47.756 fused_ordering(463) 00:12:47.756 fused_ordering(464) 00:12:47.756 fused_ordering(465) 00:12:47.756 fused_ordering(466) 00:12:47.756 fused_ordering(467) 00:12:47.756 fused_ordering(468) 00:12:47.756 fused_ordering(469) 00:12:47.756 fused_ordering(470) 00:12:47.756 fused_ordering(471) 00:12:47.756 fused_ordering(472) 00:12:47.756 fused_ordering(473) 00:12:47.756 fused_ordering(474) 00:12:47.756 fused_ordering(475) 00:12:47.756 fused_ordering(476) 00:12:47.756 fused_ordering(477) 00:12:47.756 fused_ordering(478) 00:12:47.757 fused_ordering(479) 00:12:47.757 fused_ordering(480) 00:12:47.757 fused_ordering(481) 00:12:47.757 fused_ordering(482) 00:12:47.757 fused_ordering(483) 00:12:47.757 fused_ordering(484) 00:12:47.757 fused_ordering(485) 00:12:47.757 fused_ordering(486) 00:12:47.757 fused_ordering(487) 00:12:47.757 fused_ordering(488) 00:12:47.757 fused_ordering(489) 00:12:47.757 fused_ordering(490) 00:12:47.757 fused_ordering(491) 00:12:47.757 fused_ordering(492) 00:12:47.757 fused_ordering(493) 00:12:47.757 fused_ordering(494) 00:12:47.757 fused_ordering(495) 00:12:47.757 fused_ordering(496) 00:12:47.757 fused_ordering(497) 00:12:47.757 fused_ordering(498) 00:12:47.757 fused_ordering(499) 00:12:47.757 fused_ordering(500) 00:12:47.757 fused_ordering(501) 00:12:47.757 fused_ordering(502) 00:12:47.757 fused_ordering(503) 00:12:47.757 fused_ordering(504) 00:12:47.757 fused_ordering(505) 00:12:47.757 fused_ordering(506) 00:12:47.757 fused_ordering(507) 00:12:47.757 fused_ordering(508) 00:12:47.757 fused_ordering(509) 00:12:47.757 fused_ordering(510) 00:12:47.757 fused_ordering(511) 00:12:47.757 fused_ordering(512) 00:12:47.757 fused_ordering(513) 00:12:47.757 fused_ordering(514) 00:12:47.757 fused_ordering(515) 00:12:47.757 fused_ordering(516) 00:12:47.757 fused_ordering(517) 00:12:47.757 fused_ordering(518) 00:12:47.757 fused_ordering(519) 00:12:47.757 fused_ordering(520) 00:12:47.757 fused_ordering(521) 00:12:47.757 fused_ordering(522) 00:12:47.757 fused_ordering(523) 00:12:47.757 fused_ordering(524) 00:12:47.757 fused_ordering(525) 00:12:47.757 fused_ordering(526) 00:12:47.757 fused_ordering(527) 00:12:47.757 fused_ordering(528) 
00:12:47.757 fused_ordering(529) 00:12:47.757 fused_ordering(530) 00:12:47.757 fused_ordering(531) 00:12:47.757 fused_ordering(532) 00:12:47.757 fused_ordering(533) 00:12:47.757 fused_ordering(534) 00:12:47.757 fused_ordering(535) 00:12:47.757 fused_ordering(536) 00:12:47.757 fused_ordering(537) 00:12:47.757 fused_ordering(538) 00:12:47.757 fused_ordering(539) 00:12:47.757 fused_ordering(540) 00:12:47.757 fused_ordering(541) 00:12:47.757 fused_ordering(542) 00:12:47.757 fused_ordering(543) 00:12:47.757 fused_ordering(544) 00:12:47.757 fused_ordering(545) 00:12:47.757 fused_ordering(546) 00:12:47.757 fused_ordering(547) 00:12:47.757 fused_ordering(548) 00:12:47.757 fused_ordering(549) 00:12:47.757 fused_ordering(550) 00:12:47.757 fused_ordering(551) 00:12:47.757 fused_ordering(552) 00:12:47.757 fused_ordering(553) 00:12:47.757 fused_ordering(554) 00:12:47.757 fused_ordering(555) 00:12:47.757 fused_ordering(556) 00:12:47.757 fused_ordering(557) 00:12:47.757 fused_ordering(558) 00:12:47.757 fused_ordering(559) 00:12:47.757 fused_ordering(560) 00:12:47.757 fused_ordering(561) 00:12:47.757 fused_ordering(562) 00:12:47.757 fused_ordering(563) 00:12:47.757 fused_ordering(564) 00:12:47.757 fused_ordering(565) 00:12:47.757 fused_ordering(566) 00:12:47.757 fused_ordering(567) 00:12:47.757 fused_ordering(568) 00:12:47.757 fused_ordering(569) 00:12:47.757 fused_ordering(570) 00:12:47.757 fused_ordering(571) 00:12:47.757 fused_ordering(572) 00:12:47.757 fused_ordering(573) 00:12:47.757 fused_ordering(574) 00:12:47.757 fused_ordering(575) 00:12:47.757 fused_ordering(576) 00:12:47.757 fused_ordering(577) 00:12:47.757 fused_ordering(578) 00:12:47.757 fused_ordering(579) 00:12:47.757 fused_ordering(580) 00:12:47.757 fused_ordering(581) 00:12:47.757 fused_ordering(582) 00:12:47.757 fused_ordering(583) 00:12:47.757 fused_ordering(584) 00:12:47.757 fused_ordering(585) 00:12:47.757 fused_ordering(586) 00:12:47.757 fused_ordering(587) 00:12:47.757 fused_ordering(588) 00:12:47.757 fused_ordering(589) 00:12:47.757 fused_ordering(590) 00:12:47.757 fused_ordering(591) 00:12:47.757 fused_ordering(592) 00:12:47.757 fused_ordering(593) 00:12:47.757 fused_ordering(594) 00:12:47.757 fused_ordering(595) 00:12:47.757 fused_ordering(596) 00:12:47.757 fused_ordering(597) 00:12:47.757 fused_ordering(598) 00:12:47.757 fused_ordering(599) 00:12:47.757 fused_ordering(600) 00:12:47.757 fused_ordering(601) 00:12:47.757 fused_ordering(602) 00:12:47.757 fused_ordering(603) 00:12:47.757 fused_ordering(604) 00:12:47.757 fused_ordering(605) 00:12:47.757 fused_ordering(606) 00:12:47.757 fused_ordering(607) 00:12:47.757 fused_ordering(608) 00:12:47.757 fused_ordering(609) 00:12:47.757 fused_ordering(610) 00:12:47.757 fused_ordering(611) 00:12:47.757 fused_ordering(612) 00:12:47.757 fused_ordering(613) 00:12:47.757 fused_ordering(614) 00:12:47.757 fused_ordering(615) 00:12:48.060 fused_ordering(616) 00:12:48.060 fused_ordering(617) 00:12:48.060 fused_ordering(618) 00:12:48.060 fused_ordering(619) 00:12:48.060 fused_ordering(620) 00:12:48.060 fused_ordering(621) 00:12:48.060 fused_ordering(622) 00:12:48.060 fused_ordering(623) 00:12:48.060 fused_ordering(624) 00:12:48.060 fused_ordering(625) 00:12:48.060 fused_ordering(626) 00:12:48.060 fused_ordering(627) 00:12:48.060 fused_ordering(628) 00:12:48.060 fused_ordering(629) 00:12:48.060 fused_ordering(630) 00:12:48.060 fused_ordering(631) 00:12:48.060 fused_ordering(632) 00:12:48.060 fused_ordering(633) 00:12:48.060 fused_ordering(634) 00:12:48.060 fused_ordering(635) 00:12:48.060 
fused_ordering(636) 00:12:48.060 fused_ordering(637) 00:12:48.060 fused_ordering(638) 00:12:48.060 fused_ordering(639) 00:12:48.060 fused_ordering(640) 00:12:48.060 fused_ordering(641) 00:12:48.060 fused_ordering(642) 00:12:48.060 fused_ordering(643) 00:12:48.060 fused_ordering(644) 00:12:48.060 fused_ordering(645) 00:12:48.060 fused_ordering(646) 00:12:48.060 fused_ordering(647) 00:12:48.060 fused_ordering(648) 00:12:48.060 fused_ordering(649) 00:12:48.060 fused_ordering(650) 00:12:48.060 fused_ordering(651) 00:12:48.060 fused_ordering(652) 00:12:48.060 fused_ordering(653) 00:12:48.060 fused_ordering(654) 00:12:48.060 fused_ordering(655) 00:12:48.060 fused_ordering(656) 00:12:48.060 fused_ordering(657) 00:12:48.060 fused_ordering(658) 00:12:48.060 fused_ordering(659) 00:12:48.060 fused_ordering(660) 00:12:48.060 fused_ordering(661) 00:12:48.060 fused_ordering(662) 00:12:48.060 fused_ordering(663) 00:12:48.060 fused_ordering(664) 00:12:48.060 fused_ordering(665) 00:12:48.060 fused_ordering(666) 00:12:48.060 fused_ordering(667) 00:12:48.060 fused_ordering(668) 00:12:48.060 fused_ordering(669) 00:12:48.060 fused_ordering(670) 00:12:48.060 fused_ordering(671) 00:12:48.060 fused_ordering(672) 00:12:48.060 fused_ordering(673) 00:12:48.060 fused_ordering(674) 00:12:48.060 fused_ordering(675) 00:12:48.060 fused_ordering(676) 00:12:48.060 fused_ordering(677) 00:12:48.060 fused_ordering(678) 00:12:48.060 fused_ordering(679) 00:12:48.060 fused_ordering(680) 00:12:48.060 fused_ordering(681) 00:12:48.060 fused_ordering(682) 00:12:48.060 fused_ordering(683) 00:12:48.060 fused_ordering(684) 00:12:48.060 fused_ordering(685) 00:12:48.060 fused_ordering(686) 00:12:48.060 fused_ordering(687) 00:12:48.060 fused_ordering(688) 00:12:48.060 fused_ordering(689) 00:12:48.060 fused_ordering(690) 00:12:48.060 fused_ordering(691) 00:12:48.060 fused_ordering(692) 00:12:48.060 fused_ordering(693) 00:12:48.060 fused_ordering(694) 00:12:48.060 fused_ordering(695) 00:12:48.060 fused_ordering(696) 00:12:48.060 fused_ordering(697) 00:12:48.060 fused_ordering(698) 00:12:48.060 fused_ordering(699) 00:12:48.060 fused_ordering(700) 00:12:48.060 fused_ordering(701) 00:12:48.060 fused_ordering(702) 00:12:48.060 fused_ordering(703) 00:12:48.060 fused_ordering(704) 00:12:48.060 fused_ordering(705) 00:12:48.060 fused_ordering(706) 00:12:48.060 fused_ordering(707) 00:12:48.060 fused_ordering(708) 00:12:48.060 fused_ordering(709) 00:12:48.060 fused_ordering(710) 00:12:48.060 fused_ordering(711) 00:12:48.060 fused_ordering(712) 00:12:48.060 fused_ordering(713) 00:12:48.060 fused_ordering(714) 00:12:48.060 fused_ordering(715) 00:12:48.060 fused_ordering(716) 00:12:48.060 fused_ordering(717) 00:12:48.060 fused_ordering(718) 00:12:48.060 fused_ordering(719) 00:12:48.060 fused_ordering(720) 00:12:48.060 fused_ordering(721) 00:12:48.060 fused_ordering(722) 00:12:48.060 fused_ordering(723) 00:12:48.060 fused_ordering(724) 00:12:48.060 fused_ordering(725) 00:12:48.060 fused_ordering(726) 00:12:48.060 fused_ordering(727) 00:12:48.060 fused_ordering(728) 00:12:48.060 fused_ordering(729) 00:12:48.060 fused_ordering(730) 00:12:48.060 fused_ordering(731) 00:12:48.060 fused_ordering(732) 00:12:48.060 fused_ordering(733) 00:12:48.060 fused_ordering(734) 00:12:48.060 fused_ordering(735) 00:12:48.060 fused_ordering(736) 00:12:48.060 fused_ordering(737) 00:12:48.060 fused_ordering(738) 00:12:48.060 fused_ordering(739) 00:12:48.060 fused_ordering(740) 00:12:48.060 fused_ordering(741) 00:12:48.060 fused_ordering(742) 00:12:48.060 fused_ordering(743) 
00:12:48.060 fused_ordering(744) 00:12:48.060 fused_ordering(745) 00:12:48.060 fused_ordering(746) 00:12:48.060 fused_ordering(747) 00:12:48.060 fused_ordering(748) 00:12:48.060 fused_ordering(749) 00:12:48.060 fused_ordering(750) 00:12:48.060 fused_ordering(751) 00:12:48.060 fused_ordering(752) 00:12:48.060 fused_ordering(753) 00:12:48.060 fused_ordering(754) 00:12:48.060 fused_ordering(755) 00:12:48.060 fused_ordering(756) 00:12:48.060 fused_ordering(757) 00:12:48.060 fused_ordering(758) 00:12:48.060 fused_ordering(759) 00:12:48.060 fused_ordering(760) 00:12:48.060 fused_ordering(761) 00:12:48.060 fused_ordering(762) 00:12:48.060 fused_ordering(763) 00:12:48.060 fused_ordering(764) 00:12:48.060 fused_ordering(765) 00:12:48.060 fused_ordering(766) 00:12:48.060 fused_ordering(767) 00:12:48.060 fused_ordering(768) 00:12:48.060 fused_ordering(769) 00:12:48.060 fused_ordering(770) 00:12:48.060 fused_ordering(771) 00:12:48.060 fused_ordering(772) 00:12:48.060 fused_ordering(773) 00:12:48.060 fused_ordering(774) 00:12:48.060 fused_ordering(775) 00:12:48.060 fused_ordering(776) 00:12:48.060 fused_ordering(777) 00:12:48.060 fused_ordering(778) 00:12:48.060 fused_ordering(779) 00:12:48.060 fused_ordering(780) 00:12:48.060 fused_ordering(781) 00:12:48.060 fused_ordering(782) 00:12:48.060 fused_ordering(783) 00:12:48.060 fused_ordering(784) 00:12:48.060 fused_ordering(785) 00:12:48.060 fused_ordering(786) 00:12:48.060 fused_ordering(787) 00:12:48.060 fused_ordering(788) 00:12:48.060 fused_ordering(789) 00:12:48.060 fused_ordering(790) 00:12:48.060 fused_ordering(791) 00:12:48.060 fused_ordering(792) 00:12:48.060 fused_ordering(793) 00:12:48.060 fused_ordering(794) 00:12:48.060 fused_ordering(795) 00:12:48.060 fused_ordering(796) 00:12:48.060 fused_ordering(797) 00:12:48.060 fused_ordering(798) 00:12:48.060 fused_ordering(799) 00:12:48.060 fused_ordering(800) 00:12:48.060 fused_ordering(801) 00:12:48.060 fused_ordering(802) 00:12:48.060 fused_ordering(803) 00:12:48.060 fused_ordering(804) 00:12:48.060 fused_ordering(805) 00:12:48.060 fused_ordering(806) 00:12:48.060 fused_ordering(807) 00:12:48.060 fused_ordering(808) 00:12:48.060 fused_ordering(809) 00:12:48.060 fused_ordering(810) 00:12:48.060 fused_ordering(811) 00:12:48.060 fused_ordering(812) 00:12:48.060 fused_ordering(813) 00:12:48.060 fused_ordering(814) 00:12:48.060 fused_ordering(815) 00:12:48.060 fused_ordering(816) 00:12:48.060 fused_ordering(817) 00:12:48.060 fused_ordering(818) 00:12:48.060 fused_ordering(819) 00:12:48.060 fused_ordering(820) 00:12:48.629 [2024-11-20 16:14:19.645891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823f00 is same with the state(6) to be set 00:12:48.629 fused_ordering(821) 00:12:48.629 fused_ordering(822) 00:12:48.629 fused_ordering(823) 00:12:48.629 fused_ordering(824) 00:12:48.629 fused_ordering(825) 00:12:48.629 fused_ordering(826) 00:12:48.629 fused_ordering(827) 00:12:48.629 fused_ordering(828) 00:12:48.629 fused_ordering(829) 00:12:48.629 fused_ordering(830) 00:12:48.629 fused_ordering(831) 00:12:48.629 fused_ordering(832) 00:12:48.629 fused_ordering(833) 00:12:48.629 fused_ordering(834) 00:12:48.629 fused_ordering(835) 00:12:48.629 fused_ordering(836) 00:12:48.629 fused_ordering(837) 00:12:48.629 fused_ordering(838) 00:12:48.629 fused_ordering(839) 00:12:48.629 fused_ordering(840) 00:12:48.629 fused_ordering(841) 00:12:48.629 fused_ordering(842) 00:12:48.629 fused_ordering(843) 00:12:48.629 fused_ordering(844) 00:12:48.629 fused_ordering(845) 00:12:48.629
fused_ordering(846) 00:12:48.629 fused_ordering(847) 00:12:48.629 fused_ordering(848) 00:12:48.629 fused_ordering(849) 00:12:48.629 fused_ordering(850) 00:12:48.629 fused_ordering(851) 00:12:48.629 fused_ordering(852) 00:12:48.629 fused_ordering(853) 00:12:48.629 fused_ordering(854) 00:12:48.629 fused_ordering(855) 00:12:48.629 fused_ordering(856) 00:12:48.629 fused_ordering(857) 00:12:48.629 fused_ordering(858) 00:12:48.629 fused_ordering(859) 00:12:48.629 fused_ordering(860) 00:12:48.629 fused_ordering(861) 00:12:48.629 fused_ordering(862) 00:12:48.629 fused_ordering(863) 00:12:48.629 fused_ordering(864) 00:12:48.629 fused_ordering(865) 00:12:48.629 fused_ordering(866) 00:12:48.629 fused_ordering(867) 00:12:48.629 fused_ordering(868) 00:12:48.629 fused_ordering(869) 00:12:48.629 fused_ordering(870) 00:12:48.629 fused_ordering(871) 00:12:48.629 fused_ordering(872) 00:12:48.629 fused_ordering(873) 00:12:48.629 fused_ordering(874) 00:12:48.629 fused_ordering(875) 00:12:48.629 fused_ordering(876) 00:12:48.629 fused_ordering(877) 00:12:48.629 fused_ordering(878) 00:12:48.629 fused_ordering(879) 00:12:48.629 fused_ordering(880) 00:12:48.629 fused_ordering(881) 00:12:48.629 fused_ordering(882) 00:12:48.629 fused_ordering(883) 00:12:48.629 fused_ordering(884) 00:12:48.629 fused_ordering(885) 00:12:48.629 fused_ordering(886) 00:12:48.629 fused_ordering(887) 00:12:48.629 fused_ordering(888) 00:12:48.629 fused_ordering(889) 00:12:48.629 fused_ordering(890) 00:12:48.629 fused_ordering(891) 00:12:48.629 fused_ordering(892) 00:12:48.629 fused_ordering(893) 00:12:48.629 fused_ordering(894) 00:12:48.629 fused_ordering(895) 00:12:48.629 fused_ordering(896) 00:12:48.629 fused_ordering(897) 00:12:48.629 fused_ordering(898) 00:12:48.629 fused_ordering(899) 00:12:48.629 fused_ordering(900) 00:12:48.629 fused_ordering(901) 00:12:48.629 fused_ordering(902) 00:12:48.629 fused_ordering(903) 00:12:48.629 fused_ordering(904) 00:12:48.629 fused_ordering(905) 00:12:48.629 fused_ordering(906) 00:12:48.629 fused_ordering(907) 00:12:48.629 fused_ordering(908) 00:12:48.629 fused_ordering(909) 00:12:48.629 fused_ordering(910) 00:12:48.629 fused_ordering(911) 00:12:48.629 fused_ordering(912) 00:12:48.629 fused_ordering(913) 00:12:48.629 fused_ordering(914) 00:12:48.629 fused_ordering(915) 00:12:48.629 fused_ordering(916) 00:12:48.629 fused_ordering(917) 00:12:48.629 fused_ordering(918) 00:12:48.629 fused_ordering(919) 00:12:48.629 fused_ordering(920) 00:12:48.629 fused_ordering(921) 00:12:48.629 fused_ordering(922) 00:12:48.629 fused_ordering(923) 00:12:48.629 fused_ordering(924) 00:12:48.629 fused_ordering(925) 00:12:48.629 fused_ordering(926) 00:12:48.629 fused_ordering(927) 00:12:48.629 fused_ordering(928) 00:12:48.629 fused_ordering(929) 00:12:48.629 fused_ordering(930) 00:12:48.629 fused_ordering(931) 00:12:48.629 fused_ordering(932) 00:12:48.629 fused_ordering(933) 00:12:48.629 fused_ordering(934) 00:12:48.629 fused_ordering(935) 00:12:48.629 fused_ordering(936) 00:12:48.629 fused_ordering(937) 00:12:48.629 fused_ordering(938) 00:12:48.629 fused_ordering(939) 00:12:48.629 fused_ordering(940) 00:12:48.629 fused_ordering(941) 00:12:48.629 fused_ordering(942) 00:12:48.629 fused_ordering(943) 00:12:48.629 fused_ordering(944) 00:12:48.629 fused_ordering(945) 00:12:48.629 fused_ordering(946) 00:12:48.629 fused_ordering(947) 00:12:48.629 fused_ordering(948) 00:12:48.629 fused_ordering(949) 00:12:48.629 fused_ordering(950) 00:12:48.629 fused_ordering(951) 00:12:48.629 fused_ordering(952) 00:12:48.629 fused_ordering(953) 
00:12:48.629 fused_ordering(954) 00:12:48.629 fused_ordering(955) 00:12:48.629 fused_ordering(956) 00:12:48.629 fused_ordering(957) 00:12:48.629 fused_ordering(958) 00:12:48.629 fused_ordering(959) 00:12:48.629 fused_ordering(960) 00:12:48.629 fused_ordering(961) 00:12:48.629 fused_ordering(962) 00:12:48.629 fused_ordering(963) 00:12:48.629 fused_ordering(964) 00:12:48.629 fused_ordering(965) 00:12:48.629 fused_ordering(966) 00:12:48.629 fused_ordering(967) 00:12:48.629 fused_ordering(968) 00:12:48.629 fused_ordering(969) 00:12:48.629 fused_ordering(970) 00:12:48.629 fused_ordering(971) 00:12:48.629 fused_ordering(972) 00:12:48.629 fused_ordering(973) 00:12:48.629 fused_ordering(974) 00:12:48.629 fused_ordering(975) 00:12:48.629 fused_ordering(976) 00:12:48.629 fused_ordering(977) 00:12:48.629 fused_ordering(978) 00:12:48.629 fused_ordering(979) 00:12:48.629 fused_ordering(980) 00:12:48.629 fused_ordering(981) 00:12:48.629 fused_ordering(982) 00:12:48.629 fused_ordering(983) 00:12:48.629 fused_ordering(984) 00:12:48.629 fused_ordering(985) 00:12:48.629 fused_ordering(986) 00:12:48.629 fused_ordering(987) 00:12:48.629 fused_ordering(988) 00:12:48.629 fused_ordering(989) 00:12:48.629 fused_ordering(990) 00:12:48.629 fused_ordering(991) 00:12:48.629 fused_ordering(992) 00:12:48.629 fused_ordering(993) 00:12:48.629 fused_ordering(994) 00:12:48.629 fused_ordering(995) 00:12:48.629 fused_ordering(996) 00:12:48.629 fused_ordering(997) 00:12:48.629 fused_ordering(998) 00:12:48.629 fused_ordering(999) 00:12:48.629 fused_ordering(1000) 00:12:48.629 fused_ordering(1001) 00:12:48.629 fused_ordering(1002) 00:12:48.629 fused_ordering(1003) 00:12:48.629 fused_ordering(1004) 00:12:48.629 fused_ordering(1005) 00:12:48.629 fused_ordering(1006) 00:12:48.629 fused_ordering(1007) 00:12:48.629 fused_ordering(1008) 00:12:48.629 fused_ordering(1009) 00:12:48.629 fused_ordering(1010) 00:12:48.629 fused_ordering(1011) 00:12:48.629 fused_ordering(1012) 00:12:48.629 fused_ordering(1013) 00:12:48.629 fused_ordering(1014) 00:12:48.629 fused_ordering(1015) 00:12:48.629 fused_ordering(1016) 00:12:48.629 fused_ordering(1017) 00:12:48.629 fused_ordering(1018) 00:12:48.629 fused_ordering(1019) 00:12:48.629 fused_ordering(1020) 00:12:48.629 fused_ordering(1021) 00:12:48.629 fused_ordering(1022) 00:12:48.629 fused_ordering(1023) 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.629 rmmod nvme_tcp 00:12:48.629 rmmod nvme_fabrics 00:12:48.629 rmmod nvme_keyring 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- 
# set -e 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1870515 ']' 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1870515 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1870515 ']' 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1870515 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1870515 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:48.629 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:48.630 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1870515' 00:12:48.630 killing process with pid 1870515 00:12:48.630 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1870515 00:12:48.630 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1870515 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.889 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:51.427 00:12:51.427 real 0m10.714s 00:12:51.427 user 0m4.922s 00:12:51.427 sys 0m5.884s 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 
-- # set +x 00:12:51.427 ************************************ 00:12:51.427 END TEST nvmf_fused_ordering 00:12:51.427 ************************************ 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.427 ************************************ 00:12:51.427 START TEST nvmf_ns_masking 00:12:51.427 ************************************ 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:51.427 * Looking for test storage... 00:12:51.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.427 --rc genhtml_branch_coverage=1 00:12:51.427 --rc genhtml_function_coverage=1 00:12:51.427 --rc genhtml_legend=1 00:12:51.427 --rc geninfo_all_blocks=1 00:12:51.427 --rc geninfo_unexecuted_blocks=1 00:12:51.427 00:12:51.427 ' 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.427 --rc genhtml_branch_coverage=1 00:12:51.427 --rc genhtml_function_coverage=1 00:12:51.427 --rc genhtml_legend=1 00:12:51.427 --rc geninfo_all_blocks=1 00:12:51.427 --rc geninfo_unexecuted_blocks=1 00:12:51.427 00:12:51.427 ' 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.427 --rc genhtml_branch_coverage=1 00:12:51.427 --rc genhtml_function_coverage=1 00:12:51.427 --rc genhtml_legend=1 00:12:51.427 --rc geninfo_all_blocks=1 00:12:51.427 --rc geninfo_unexecuted_blocks=1 00:12:51.427 00:12:51.427 ' 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.427 --rc genhtml_branch_coverage=1 00:12:51.427 --rc genhtml_function_coverage=1 00:12:51.427 --rc genhtml_legend=1 00:12:51.427 --rc geninfo_all_blocks=1 00:12:51.427 --rc geninfo_unexecuted_blocks=1 00:12:51.427 00:12:51.427 ' 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.427 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:51.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=398a6de9-5038-40db-875d-0c567439b295 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b2ac185c-1e94-4fad-8c21-b30c5769d76a 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5fa0b03a-0e94-4533-9726-ec06a59cce92 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:51.428 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.996 16:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.996 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:57.997 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:57.997 16:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:57.997 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:57.997 Found net devices under 0000:86:00.0: cvl_0_0 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
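The scan above has just matched the first E810 port (0000:86:00.0, 8086:159b, bound to the ice driver) and resolved its net device, cvl_0_0; the same check repeats below for 0000:86:00.1 and cvl_0_1. Outside the harness, that discovery step can be approximated with a short loop over the same /sys/bus/pci layout the script reads. This is a sketch only: it assumes lspci is installed and reuses the 8086:159b device ID seen in the trace, and it is not part of nvmf/common.sh itself.

    # List every matching port (vendor 0x8086, device 0x159b) and the kernel net
    # device(s) sitting under it, approximating what the trace above reports.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev" ] && echo "Found net device under $pci: $(basename "$netdev")"
        done
    done
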
00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:57.997 Found net devices under 0000:86:00.1: cvl_0_1 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.997 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.997 16:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:57.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:12:57.997 00:12:57.997 --- 10.0.0.2 ping statistics --- 00:12:57.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.997 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:57.997 00:12:57.997 --- 10.0.0.1 ping statistics --- 00:12:57.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.997 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1874328 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1874328 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:57.997 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1874328 ']' 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:57.998 [2024-11-20 16:14:28.343440] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:12:57.998 [2024-11-20 16:14:28.343492] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.998 [2024-11-20 16:14:28.422757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.998 [2024-11-20 16:14:28.463322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.998 [2024-11-20 16:14:28.463355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.998 [2024-11-20 16:14:28.463362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.998 [2024-11-20 16:14:28.463368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.998 [2024-11-20 16:14:28.463373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
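The target application is now up inside the cvl_0_0_ns_spdk namespace and listening on /var/tmp/spdk.sock; everything that follows is ns_masking.sh driving it over JSON-RPC and checking the result from the initiator with nvme-cli. For orientation, here is a condensed, hand-written sketch of that sequence. The commands, NQNs, and addresses are copied from the trace below; the retry loops, waitforserial polling, and NOT/xtrace plumbing are deliberately omitted, so treat it as a reading aid rather than the script itself.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Target side: TCP transport, two 64 MB malloc bdevs, one subsystem, one listener.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1      # auto-visible
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect with an explicit host NQN and host ID, then check
    # which namespaces the controller exposes. An all-zero NGUID for an NSID means
    # that namespace is not visible to this host (this is what ns_is_visible tests).
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
         -I 5fa0b03a-0e94-4533-9726-ec06a59cce92 -a 10.0.0.2 -s 4420 -i 4
    nvme list-ns /dev/nvme0
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

    # Masking itself: re-add namespace 1 with --no-auto-visible, so it stays hidden
    # until it is granted to a specific host and disappears again when the grant
    # is removed.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The second malloc bdev is attached as namespace 2 and stays auto-visible throughout, so the nguid checks in the trace below alternate between NSID 0x1 and 0x2 to confirm that only the masked namespace appears and disappears.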
00:12:57.998 [2024-11-20 16:14:28.463923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:57.998 [2024-11-20 16:14:28.765555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:57.998 Malloc1 00:12:57.998 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:57.998 Malloc2 00:12:57.998 16:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:58.257 16:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:58.516 16:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.775 [2024-11-20 16:14:29.772186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.775 16:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:58.775 16:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5fa0b03a-0e94-4533-9726-ec06a59cce92 -a 10.0.0.2 -s 4420 -i 4 00:12:59.035 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.035 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.035 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.035 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:59.035 
16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:00.940 [ 0]:0x1 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:00.940 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.200 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9432345ace2445e5b77b3ae7ab555bfe 00:13:01.200 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9432345ace2445e5b77b3ae7ab555bfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.200 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:01.200 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:01.200 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.200 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.200 [ 0]:0x1 00:13:01.200 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.200 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.458 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9432345ace2445e5b77b3ae7ab555bfe 00:13:01.458 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9432345ace2445e5b77b3ae7ab555bfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.458 16:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:01.458 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.458 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.458 [ 1]:0x2 00:13:01.458 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.458 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.458 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fa6d361666604d269fcb222a04a2c436 00:13:01.458 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fa6d361666604d269fcb222a04a2c436 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.458 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:01.458 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.458 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.717 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:01.976 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:01.976 16:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5fa0b03a-0e94-4533-9726-ec06a59cce92 -a 10.0.0.2 -s 4420 -i 4 00:13:01.976 16:14:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:01.976 16:14:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:01.976 16:14:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.976 16:14:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:01.976 16:14:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:01.976 16:14:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:03.878 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:03.878 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:03.878 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.878 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:03.878 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.878 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:13:03.878 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:03.878 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:04.137 [ 0]:0x2 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=fa6d361666604d269fcb222a04a2c436 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fa6d361666604d269fcb222a04a2c436 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.137 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:04.397 [ 0]:0x1 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9432345ace2445e5b77b3ae7ab555bfe 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9432345ace2445e5b77b3ae7ab555bfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:04.397 [ 1]:0x2 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.397 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fa6d361666604d269fcb222a04a2c436 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fa6d361666604d269fcb222a04a2c436 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.656 16:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:04.656 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.915 [ 0]:0x2 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fa6d361666604d269fcb222a04a2c436 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fa6d361666604d269fcb222a04a2c436 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:04.915 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.915 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:05.175 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:05.175 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5fa0b03a-0e94-4533-9726-ec06a59cce92 -a 10.0.0.2 -s 4420 -i 4 00:13:05.175 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:05.175 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:05.175 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.175 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:05.175 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:05.175 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.709 [ 0]:0x1 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9432345ace2445e5b77b3ae7ab555bfe 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9432345ace2445e5b77b3ae7ab555bfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:07.709 [ 1]:0x2 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fa6d361666604d269fcb222a04a2c436 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fa6d361666604d269fcb222a04a2c436 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:07.709 [ 0]:0x2 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fa6d361666604d269fcb222a04a2c436 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fa6d361666604d269fcb222a04a2c436 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.709 16:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.709 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.968 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.968 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.968 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.968 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.968 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:07.968 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:07.968 [2024-11-20 16:14:39.106598] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:07.968 request: 00:13:07.968 { 00:13:07.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.968 "nsid": 2, 00:13:07.968 "host": "nqn.2016-06.io.spdk:host1", 00:13:07.968 "method": "nvmf_ns_remove_host", 00:13:07.968 "req_id": 1 00:13:07.968 } 00:13:07.968 Got JSON-RPC error response 00:13:07.968 response: 00:13:07.968 { 00:13:07.968 "code": -32602, 00:13:07.968 "message": "Invalid parameters" 00:13:07.968 } 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:07.968 16:14:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.968 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:08.226 [ 0]:0x2 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fa6d361666604d269fcb222a04a2c436 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fa6d361666604d269fcb222a04a2c436 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1876311 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1876311 /var/tmp/host.sock 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1876311 ']' 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:08.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.226 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:08.226 [2024-11-20 16:14:39.338002] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:13:08.226 [2024-11-20 16:14:39.338046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876311 ] 00:13:08.226 [2024-11-20 16:14:39.411213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.226 [2024-11-20 16:14:39.451538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.485 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.485 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:08.485 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.743 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:09.001 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 398a6de9-5038-40db-875d-0c567439b295 00:13:09.001 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:09.001 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 398A6DE9503840DB875D0C567439B295 -i 00:13:09.259 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid b2ac185c-1e94-4fad-8c21-b30c5769d76a 00:13:09.259 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:09.259 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B2AC185C1E944FAD8C21B30C5769D76A -i 00:13:09.259 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:09.571 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:09.829 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:09.829 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:10.088 nvme0n1 00:13:10.088 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:10.088 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:10.347 nvme1n2 00:13:10.607 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:10.607 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:10.607 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:10.607 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:10.607 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:10.607 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:10.607 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:10.607 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:10.607 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:10.865 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 398a6de9-5038-40db-875d-0c567439b295 == \3\9\8\a\6\d\e\9\-\5\0\3\8\-\4\0\d\b\-\8\7\5\d\-\0\c\5\6\7\4\3\9\b\2\9\5 ]] 00:13:10.866 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:10.866 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:10.866 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:11.124 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
b2ac185c-1e94-4fad-8c21-b30c5769d76a == \b\2\a\c\1\8\5\c\-\1\e\9\4\-\4\f\a\d\-\8\c\2\1\-\b\3\0\c\5\7\6\9\d\7\6\a ]] 00:13:11.124 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 398a6de9-5038-40db-875d-0c567439b295 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 398A6DE9503840DB875D0C567439B295 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 398A6DE9503840DB875D0C567439B295 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:11.383 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 398A6DE9503840DB875D0C567439B295 00:13:11.642 [2024-11-20 16:14:42.764867] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:11.642 [2024-11-20 16:14:42.764895] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:11.642 [2024-11-20 16:14:42.764903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.642 request: 00:13:11.642 { 00:13:11.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.642 "namespace": { 00:13:11.642 "bdev_name": 
"invalid", 00:13:11.642 "nsid": 1, 00:13:11.642 "nguid": "398A6DE9503840DB875D0C567439B295", 00:13:11.642 "no_auto_visible": false 00:13:11.642 }, 00:13:11.642 "method": "nvmf_subsystem_add_ns", 00:13:11.642 "req_id": 1 00:13:11.642 } 00:13:11.642 Got JSON-RPC error response 00:13:11.642 response: 00:13:11.642 { 00:13:11.642 "code": -32602, 00:13:11.642 "message": "Invalid parameters" 00:13:11.642 } 00:13:11.642 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:11.642 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:11.642 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:11.642 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:11.642 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 398a6de9-5038-40db-875d-0c567439b295 00:13:11.642 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:11.642 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 398A6DE9503840DB875D0C567439B295 -i 00:13:11.901 16:14:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:13.804 16:14:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:13.804 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:13.804 16:14:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1876311 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1876311 ']' 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1876311 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1876311 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1876311' 00:13:14.062 killing process with pid 1876311 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1876311 00:13:14.062 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1876311 00:13:14.630 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.630 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:14.630 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:14.630 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:14.630 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:14.630 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:14.630 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:14.631 rmmod nvme_tcp 00:13:14.631 rmmod nvme_fabrics 00:13:14.631 rmmod nvme_keyring 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1874328 ']' 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1874328 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1874328 ']' 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1874328 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.631 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1874328 00:13:14.890 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.890 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.890 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1874328' 00:13:14.890 killing process with pid 1874328 00:13:14.890 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1874328 00:13:14.890 16:14:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1874328 00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
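For orientation: the tail of the masking test above tears everything down in order — delete the subsystem over RPC, kill the nvmf_tgt process, strip the SPDK-tagged iptables rules, unload the host-side nvme-tcp/nvme-fabrics/nvme-keyring modules, and remove the test network namespace. A condensed sketch of that teardown, assuming the paths and names used in this run; the namespace cleanup lines at the end are illustrative only (the harness does this through its own _remove_spdk_ns helper):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1          # revoke the export first
kill "$nvmfpid"                                                # stop the target (pid 1874328 in this run)
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done       # wait for it to actually exit
iptables-save | grep -v SPDK_NVMF | iptables-restore           # drop only the SPDK_NVMF-tagged rules
modprobe -r nvme-tcp nvme-fabrics nvme-keyring                 # host-side initiator modules
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 netns 1 || true   # illustrative: return the NIC
ip netns delete cvl_0_0_ns_spdk || true                              # illustrative: drop the namespace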
00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.890 16:14:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:17.426 00:13:17.426 real 0m26.033s 00:13:17.426 user 0m31.146s 00:13:17.426 sys 0m7.060s 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:17.426 ************************************ 00:13:17.426 END TEST nvmf_ns_masking 00:13:17.426 ************************************ 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:17.426 ************************************ 00:13:17.426 START TEST nvmf_nvme_cli 00:13:17.426 ************************************ 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:17.426 * Looking for test storage... 
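For readers skimming the log: the nvmf_ns_masking test that just ended exercises SPDK's per-host namespace visibility. Namespaces are added with an explicit NGUID (the UUID with its dashes stripped) and, with the -i flag used in the trace (which appears to map to the no_auto_visible setting), stay hidden until a host is explicitly granted access; the host side then checks what it can see with nvme list-ns / nvme id-ns. A condensed sketch of that flow using the same tools and values as this run — a summary of the trace, not a replacement for ns_masking.sh:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2016-06.io.spdk:host1

# target side: hidden namespace plus per-host grant/revoke
NGUID=$(echo 398a6de9-5038-40db-875d-0c567439b295 | tr -d - | tr a-f A-F)
$RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 1 -g "$NGUID" -i    # -i: not auto-visible (as used above)
$RPC nvmf_ns_add_host    $NQN 1 $HOST                          # NSID 1 becomes visible to host1
$RPC nvmf_ns_remove_host $NQN 1 $HOST                          # and is masked again

# host side: connect and check visibility
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $NQN -q $HOST
nvme list-ns /dev/nvme0 | grep 0x1 || echo "NSID 1 is masked"
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid            # reads as all zeros above while masked
nvme disconnect -n $NQN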
00:13:17.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.426 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:17.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.427 --rc genhtml_branch_coverage=1 00:13:17.427 --rc genhtml_function_coverage=1 00:13:17.427 --rc genhtml_legend=1 00:13:17.427 --rc geninfo_all_blocks=1 00:13:17.427 --rc geninfo_unexecuted_blocks=1 00:13:17.427 00:13:17.427 ' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:17.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.427 --rc genhtml_branch_coverage=1 00:13:17.427 --rc genhtml_function_coverage=1 00:13:17.427 --rc genhtml_legend=1 00:13:17.427 --rc geninfo_all_blocks=1 00:13:17.427 --rc geninfo_unexecuted_blocks=1 00:13:17.427 00:13:17.427 ' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:17.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.427 --rc genhtml_branch_coverage=1 00:13:17.427 --rc genhtml_function_coverage=1 00:13:17.427 --rc genhtml_legend=1 00:13:17.427 --rc geninfo_all_blocks=1 00:13:17.427 --rc geninfo_unexecuted_blocks=1 00:13:17.427 00:13:17.427 ' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:17.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.427 --rc genhtml_branch_coverage=1 00:13:17.427 --rc genhtml_function_coverage=1 00:13:17.427 --rc genhtml_legend=1 00:13:17.427 --rc geninfo_all_blocks=1 00:13:17.427 --rc geninfo_unexecuted_blocks=1 00:13:17.427 00:13:17.427 ' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
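The block above is common.sh probing the installed lcov: its version string is extracted and pushed through the cmp_versions helper ("lt 1.15 2"), which splits both versions on dots and dashes and compares them field by field. A self-contained illustration of that comparison in plain bash (an illustration of the idea, not the harness's exact helper, and it assumes purely numeric fields):

# return 0 (true) if $1 < $2, comparing dot/dash-separated numeric fields
version_lt() {
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1
}
version_lt 1.15 2 && echo "1.15 < 2"    # mirrors the 'lt 1.15 2' probe in the trace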
00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:17.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:17.427 16:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:17.427 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:17.428 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:17.428 16:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:23.997 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:23.998 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:23.998 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.998 
16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:23.998 Found net devices under 0000:86:00.0: cvl_0_0 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:23.998 Found net devices under 0000:86:00.1: cvl_0_1 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:23.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:13:23.998 00:13:23.998 --- 10.0.0.2 ping statistics --- 00:13:23.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.998 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
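The nvmf_tcp_init sequence above builds the "phy" test topology: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and becomes the target side at 10.0.0.2, its peer port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and TCP port 4420 is opened with an SPDK-tagged iptables rule; the two pings are the sanity check. Condensed, using only commands that appear in this trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator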
00:13:23.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:13:23.998 00:13:23.998 --- 10.0.0.1 ping statistics --- 00:13:23.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.998 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1881022 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1881022 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1881022 ']' 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.998 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.998 [2024-11-20 16:14:54.462080] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
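Condensed out of the trace above, the TCP test network plumbing is the following sequence (a minimal consolidation of this run; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are specific to the e810 ports on this host, and the real helper tags the iptables rule with a longer SPDK_NVMF comment than shown here):

    # Flush and repartition the two e810 ports: cvl_0_0 becomes the target
    # interface inside its own namespace, cvl_0_1 stays as the initiator.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open TCP/4420 on the initiator side (comment shortened; the tag lets
    # cleanup strip the rule later), ping both ways, load the host driver.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp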
00:13:23.998 [2024-11-20 16:14:54.462125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.998 [2024-11-20 16:14:54.541305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.998 [2024-11-20 16:14:54.584235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.998 [2024-11-20 16:14:54.584272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.998 [2024-11-20 16:14:54.584279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.998 [2024-11-20 16:14:54.584285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.999 [2024-11-20 16:14:54.584289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.999 [2024-11-20 16:14:54.585805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.999 [2024-11-20 16:14:54.585918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.999 [2024-11-20 16:14:54.586026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.999 [2024-11-20 16:14:54.586027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.999 [2024-11-20 16:14:54.723125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.999 Malloc0 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
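Stripped of the rpc_cmd/xtrace plumbing, the target bring-up traced so far amounts to the following (a sketch with paths shortened; the nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls that export these bdevs follow just below in the trace):

    # Run the target inside the namespace that owns cvl_0_0.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # Once the RPC socket is up: TCP transport plus two 64 MB malloc bdevs
    # with 512-byte blocks.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1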
00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.999 Malloc1 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.999 [2024-11-20 16:14:54.816969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.999 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:23.999 00:13:23.999 Discovery Log Number of Records 2, Generation counter 2 00:13:23.999 =====Discovery Log Entry 0====== 00:13:23.999 trtype: tcp 00:13:23.999 adrfam: ipv4 00:13:23.999 subtype: current discovery subsystem 00:13:23.999 treq: not required 00:13:23.999 portid: 0 00:13:23.999 trsvcid: 4420 00:13:23.999 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:23.999 traddr: 10.0.0.2 00:13:23.999 eflags: explicit discovery connections, duplicate discovery information 00:13:23.999 sectype: none 00:13:23.999 =====Discovery Log Entry 1====== 00:13:23.999 trtype: tcp 00:13:23.999 adrfam: ipv4 00:13:23.999 subtype: nvme subsystem 00:13:23.999 treq: not required 00:13:23.999 portid: 0 00:13:23.999 trsvcid: 4420 00:13:23.999 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:23.999 traddr: 10.0.0.2 00:13:23.999 eflags: none 00:13:23.999 sectype: none 00:13:23.999 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:23.999 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:23.999 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:23.999 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.999 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:23.999 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:23.999 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.999 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:23.999 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.999 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:23.999 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.931 16:14:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:24.931 16:14:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:24.931 16:14:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.931 16:14:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:24.931 16:14:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:24.931 16:14:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:27.457 16:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:27.457 /dev/nvme0n2 ]] 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.457 16:14:58 
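On the initiator side this exchange reduces to plain nvme-cli (a sketch; the host identity is the one nvme gen-hostnqn produced for this machine, and SPDKISFASTANDAWESOME is the serial the subsystem was created with):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
    # Browse the discovery service, attach the subsystem, check that both
    # malloc namespaces surface as block devices, then detach again.
    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN --hostid=$HOSTID
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=$HOSTNQN --hostid=$HOSTID
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The rest of the trace is teardown: nvmf_delete_subsystem over RPC, killing the target, unloading nvme-tcp/nvme-fabrics, restoring iptables without the SPDK_NVMF rule, and removing the cvl_0_0_ns_spdk namespace.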
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.457 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:27.458 rmmod nvme_tcp 00:13:27.458 rmmod nvme_fabrics 00:13:27.458 rmmod nvme_keyring 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1881022 ']' 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1881022 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1881022 ']' 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1881022 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1881022 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1881022' 00:13:27.458 killing process with pid 1881022 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1881022 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1881022 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.458 16:14:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.992 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:29.992 00:13:29.992 real 0m12.497s 00:13:29.992 user 0m17.920s 00:13:29.992 sys 0m5.041s 00:13:29.992 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.992 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.992 ************************************ 00:13:29.992 END TEST nvmf_nvme_cli 00:13:29.992 ************************************ 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.993 ************************************ 00:13:29.993 START TEST nvmf_vfio_user 00:13:29.993 ************************************ 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:13:29.993 * Looking for test storage... 00:13:29.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.993 --rc genhtml_branch_coverage=1 00:13:29.993 --rc genhtml_function_coverage=1 00:13:29.993 --rc genhtml_legend=1 00:13:29.993 --rc geninfo_all_blocks=1 00:13:29.993 --rc geninfo_unexecuted_blocks=1 00:13:29.993 00:13:29.993 ' 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.993 --rc genhtml_branch_coverage=1 00:13:29.993 --rc genhtml_function_coverage=1 00:13:29.993 --rc genhtml_legend=1 00:13:29.993 --rc geninfo_all_blocks=1 00:13:29.993 --rc geninfo_unexecuted_blocks=1 00:13:29.993 00:13:29.993 ' 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.993 --rc genhtml_branch_coverage=1 00:13:29.993 --rc genhtml_function_coverage=1 00:13:29.993 --rc genhtml_legend=1 00:13:29.993 --rc geninfo_all_blocks=1 00:13:29.993 --rc geninfo_unexecuted_blocks=1 00:13:29.993 00:13:29.993 ' 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.993 --rc genhtml_branch_coverage=1 00:13:29.993 --rc genhtml_function_coverage=1 00:13:29.993 --rc genhtml_legend=1 00:13:29.993 --rc geninfo_all_blocks=1 00:13:29.993 --rc geninfo_unexecuted_blocks=1 00:13:29.993 00:13:29.993 ' 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:29.993 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
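Buried in the common.sh and nvmf_vfio_user.sh trace above, the settings that matter for the rest of this test are roughly these (a sketch; the NVME_HOSTID derivation is an assumption that matches the identical UUIDs seen in this run):

    NVMF_PORT=4420                       # NVMe-oF listener port used throughout
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: the UUID suffix of the hostnqn
    MALLOC_BDEV_SIZE=64
    MALLOC_BLOCK_SIZE=512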
00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1882268 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1882268' 00:13:29.994 Process pid: 1882268 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1882268 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1882268 ']' 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.994 16:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:29.994 [2024-11-20 16:15:01.041070] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:13:29.994 [2024-11-20 16:15:01.041121] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.994 [2024-11-20 16:15:01.113777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.994 [2024-11-20 16:15:01.155875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.994 [2024-11-20 16:15:01.155911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
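The vfio-user target is launched like the TCP one, minus the network namespace and with a reactor on each of cores 0-3 (a sketch with paths shortened; the polling loop is only a stand-in for the waitforlisten helper seen in the trace):

    # No namespace this time: the target listens on vfio-user sockets, not TCP.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # Wait until the RPC socket answers before issuing configuration RPCs.
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done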
00:13:29.994 [2024-11-20 16:15:01.155918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.994 [2024-11-20 16:15:01.155924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.994 [2024-11-20 16:15:01.155930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.994 [2024-11-20 16:15:01.157541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.994 [2024-11-20 16:15:01.157639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.994 [2024-11-20 16:15:01.157744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.994 [2024-11-20 16:15:01.157746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.310 16:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.310 16:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:30.310 16:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:31.279 16:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:31.279 16:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:31.279 16:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:31.279 16:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:31.279 16:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:31.279 16:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:31.536 Malloc1 00:13:31.536 16:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:31.793 16:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:32.050 16:15:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:32.307 16:15:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:32.307 16:15:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:32.307 16:15:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:32.307 Malloc2 00:13:32.307 16:15:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
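Per device, the provisioning traced here follows one pattern, which the trace repeats below for the second device, cnode2/Malloc2 (a sketch with paths shortened):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        # For VFIOUSER the listener "address" is the directory in which the
        # target creates the vfio-user control socket (cntrl) for the subsystem.
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done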
00:13:32.564 16:15:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:32.821 16:15:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:33.080 16:15:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:33.080 16:15:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:33.080 16:15:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:33.080 16:15:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:33.080 16:15:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:33.080 16:15:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:33.080 [2024-11-20 16:15:04.155051] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:13:33.080 [2024-11-20 16:15:04.155080] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882928 ] 00:13:33.080 [2024-11-20 16:15:04.195240] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:33.080 [2024-11-20 16:15:04.203554] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:33.080 [2024-11-20 16:15:04.203576] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa5a7435000 00:13:33.080 [2024-11-20 16:15:04.204552] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.080 [2024-11-20 16:15:04.205551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.080 [2024-11-20 16:15:04.206554] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.080 [2024-11-20 16:15:04.207558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:33.080 [2024-11-20 16:15:04.208565] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:33.080 [2024-11-20 16:15:04.209568] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.080 [2024-11-20 16:15:04.210578] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:33.080 [2024-11-20 16:15:04.211584] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.080 [2024-11-20 16:15:04.212592] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:33.080 [2024-11-20 16:15:04.212601] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa5a742a000 00:13:33.081 [2024-11-20 16:15:04.213520] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:33.081 [2024-11-20 16:15:04.227465] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:33.081 [2024-11-20 16:15:04.227495] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:33.081 [2024-11-20 16:15:04.229713] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:33.081 [2024-11-20 16:15:04.229750] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:33.081 [2024-11-20 16:15:04.229817] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:33.081 [2024-11-20 16:15:04.229831] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:33.081 [2024-11-20 16:15:04.229837] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:33.081 [2024-11-20 16:15:04.230713] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:33.081 [2024-11-20 16:15:04.230722] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:33.081 [2024-11-20 16:15:04.230728] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:33.081 [2024-11-20 16:15:04.231717] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:33.081 [2024-11-20 16:15:04.231727] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:33.081 [2024-11-20 16:15:04.231735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:33.081 [2024-11-20 16:15:04.232722] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:33.081 [2024-11-20 16:15:04.232730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:33.081 [2024-11-20 16:15:04.233729] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
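The client side of this section is the stock identify example pointed at the vfio-user socket directory instead of a PCI address; the -L flags enable the DEBUG log components whose output fills the trace (a sketch with paths shortened):

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci

The surrounding DEBUG lines are the usual controller bring-up against the emulated register file: read VS and CAP, observe CC.EN=0 with CSTS.RDY=0, program AQA/ASQ/ACQ, set CC.EN=1, wait for CSTS.RDY=1, then issue IDENTIFY for the controller, the active namespace list, and namespace 1.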
00:13:33.081 [2024-11-20 16:15:04.233737] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:33.081 [2024-11-20 16:15:04.233744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:33.081 [2024-11-20 16:15:04.233749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:33.081 [2024-11-20 16:15:04.233857] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:33.081 [2024-11-20 16:15:04.233861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:33.081 [2024-11-20 16:15:04.233865] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:33.081 [2024-11-20 16:15:04.234739] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:33.081 [2024-11-20 16:15:04.235739] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:33.081 [2024-11-20 16:15:04.236748] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:33.081 [2024-11-20 16:15:04.237744] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.081 [2024-11-20 16:15:04.237807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:33.081 [2024-11-20 16:15:04.238752] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:33.081 [2024-11-20 16:15:04.238759] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:33.081 [2024-11-20 16:15:04.238763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:33.081 [2024-11-20 16:15:04.238780] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:33.081 [2024-11-20 16:15:04.238787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:33.081 [2024-11-20 16:15:04.238801] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:33.081 [2024-11-20 16:15:04.238806] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:33.081 [2024-11-20 16:15:04.238809] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.081 [2024-11-20 16:15:04.238822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:13:33.081 [2024-11-20 16:15:04.238863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:33.081 [2024-11-20 16:15:04.238872] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:33.081 [2024-11-20 16:15:04.238876] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:33.081 [2024-11-20 16:15:04.238880] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:33.081 [2024-11-20 16:15:04.238884] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:33.081 [2024-11-20 16:15:04.238890] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:33.081 [2024-11-20 16:15:04.238896] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:33.081 [2024-11-20 16:15:04.238901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:33.081 [2024-11-20 16:15:04.238909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:33.081 [2024-11-20 16:15:04.238919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:33.081 [2024-11-20 16:15:04.238929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:33.081 [2024-11-20 16:15:04.238938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.081 [2024-11-20 16:15:04.238946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.081 [2024-11-20 16:15:04.238953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.081 [2024-11-20 16:15:04.238960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.081 [2024-11-20 16:15:04.238964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:33.081 [2024-11-20 16:15:04.238970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:33.081 [2024-11-20 16:15:04.238978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:33.081 [2024-11-20 16:15:04.238989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:33.081 [2024-11-20 16:15:04.238996] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:33.081 
[2024-11-20 16:15:04.239001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:33.081 [2024-11-20 16:15:04.239007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:33.081 [2024-11-20 16:15:04.239012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:33.081 [2024-11-20 16:15:04.239019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:33.081 [2024-11-20 16:15:04.239030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:33.081 [2024-11-20 16:15:04.239080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:33.081 [2024-11-20 16:15:04.239087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:33.081 [2024-11-20 16:15:04.239093] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:33.082 [2024-11-20 16:15:04.239097] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:33.082 [2024-11-20 16:15:04.239100] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.082 [2024-11-20 16:15:04.239106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:33.082 [2024-11-20 16:15:04.239117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:33.082 [2024-11-20 16:15:04.239126] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:33.082 [2024-11-20 16:15:04.239137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:33.082 [2024-11-20 16:15:04.239144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:33.082 [2024-11-20 16:15:04.239150] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:33.082 [2024-11-20 16:15:04.239154] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:33.082 [2024-11-20 16:15:04.239157] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.082 [2024-11-20 16:15:04.239162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:33.082 [2024-11-20 16:15:04.239182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:33.082 [2024-11-20 16:15:04.239192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:13:33.082 [2024-11-20 16:15:04.239199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:33.082 [2024-11-20 16:15:04.239210] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:33.082 [2024-11-20 16:15:04.239214] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:33.082 [2024-11-20 16:15:04.239217] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.082 [2024-11-20 16:15:04.239223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:33.082 [2024-11-20 16:15:04.239232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:33.082 [2024-11-20 16:15:04.239239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:33.082 [2024-11-20 16:15:04.239245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:33.082 [2024-11-20 16:15:04.239252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:33.082 [2024-11-20 16:15:04.239257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:33.082 [2024-11-20 16:15:04.239262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:33.082 [2024-11-20 16:15:04.239266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:33.082 [2024-11-20 16:15:04.239270] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:33.082 [2024-11-20 16:15:04.239274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:33.082 [2024-11-20 16:15:04.239279] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:33.082 [2024-11-20 16:15:04.239296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:33.082 [2024-11-20 16:15:04.239304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:33.082 [2024-11-20 16:15:04.239315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:33.082 [2024-11-20 16:15:04.239321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:33.082 [2024-11-20 16:15:04.239331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:33.082 [2024-11-20 16:15:04.239343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:33.082 [2024-11-20 16:15:04.239353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:33.082 [2024-11-20 16:15:04.239363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:33.082 [2024-11-20 16:15:04.239374] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:33.082 [2024-11-20 16:15:04.239378] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:33.082 [2024-11-20 16:15:04.239382] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:33.082 [2024-11-20 16:15:04.239385] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:33.082 [2024-11-20 16:15:04.239387] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:33.082 [2024-11-20 16:15:04.239393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:33.082 [2024-11-20 16:15:04.239400] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:33.082 [2024-11-20 16:15:04.239404] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:33.082 [2024-11-20 16:15:04.239407] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.082 [2024-11-20 16:15:04.239412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:33.082 [2024-11-20 16:15:04.239418] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:33.082 [2024-11-20 16:15:04.239422] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:33.082 [2024-11-20 16:15:04.239425] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.082 [2024-11-20 16:15:04.239430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:33.082 [2024-11-20 16:15:04.239437] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:33.082 [2024-11-20 16:15:04.239441] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:33.082 [2024-11-20 16:15:04.239444] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.082 [2024-11-20 16:15:04.239449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:33.082 [2024-11-20 16:15:04.239455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:33.082 [2024-11-20 16:15:04.239466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:13:33.082 [2024-11-20 16:15:04.239477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:33.082 [2024-11-20 16:15:04.239484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:33.082 ===================================================== 00:13:33.082 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:33.082 ===================================================== 00:13:33.082 Controller Capabilities/Features 00:13:33.082 ================================ 00:13:33.082 Vendor ID: 4e58 00:13:33.082 Subsystem Vendor ID: 4e58 00:13:33.082 Serial Number: SPDK1 00:13:33.082 Model Number: SPDK bdev Controller 00:13:33.082 Firmware Version: 25.01 00:13:33.082 Recommended Arb Burst: 6 00:13:33.082 IEEE OUI Identifier: 8d 6b 50 00:13:33.082 Multi-path I/O 00:13:33.082 May have multiple subsystem ports: Yes 00:13:33.082 May have multiple controllers: Yes 00:13:33.082 Associated with SR-IOV VF: No 00:13:33.082 Max Data Transfer Size: 131072 00:13:33.082 Max Number of Namespaces: 32 00:13:33.082 Max Number of I/O Queues: 127 00:13:33.082 NVMe Specification Version (VS): 1.3 00:13:33.082 NVMe Specification Version (Identify): 1.3 00:13:33.082 Maximum Queue Entries: 256 00:13:33.082 Contiguous Queues Required: Yes 00:13:33.082 Arbitration Mechanisms Supported 00:13:33.082 Weighted Round Robin: Not Supported 00:13:33.082 Vendor Specific: Not Supported 00:13:33.082 Reset Timeout: 15000 ms 00:13:33.082 Doorbell Stride: 4 bytes 00:13:33.082 NVM Subsystem Reset: Not Supported 00:13:33.082 Command Sets Supported 00:13:33.082 NVM Command Set: Supported 00:13:33.082 Boot Partition: Not Supported 00:13:33.082 Memory Page Size Minimum: 4096 bytes 00:13:33.083 Memory Page Size Maximum: 4096 bytes 00:13:33.083 Persistent Memory Region: Not Supported 00:13:33.083 Optional Asynchronous Events Supported 00:13:33.083 Namespace Attribute Notices: Supported 00:13:33.083 Firmware Activation Notices: Not Supported 00:13:33.083 ANA Change Notices: Not Supported 00:13:33.083 PLE Aggregate Log Change Notices: Not Supported 00:13:33.083 LBA Status Info Alert Notices: Not Supported 00:13:33.083 EGE Aggregate Log Change Notices: Not Supported 00:13:33.083 Normal NVM Subsystem Shutdown event: Not Supported 00:13:33.083 Zone Descriptor Change Notices: Not Supported 00:13:33.083 Discovery Log Change Notices: Not Supported 00:13:33.083 Controller Attributes 00:13:33.083 128-bit Host Identifier: Supported 00:13:33.083 Non-Operational Permissive Mode: Not Supported 00:13:33.083 NVM Sets: Not Supported 00:13:33.083 Read Recovery Levels: Not Supported 00:13:33.083 Endurance Groups: Not Supported 00:13:33.083 Predictable Latency Mode: Not Supported 00:13:33.083 Traffic Based Keep ALive: Not Supported 00:13:33.083 Namespace Granularity: Not Supported 00:13:33.083 SQ Associations: Not Supported 00:13:33.083 UUID List: Not Supported 00:13:33.083 Multi-Domain Subsystem: Not Supported 00:13:33.083 Fixed Capacity Management: Not Supported 00:13:33.083 Variable Capacity Management: Not Supported 00:13:33.083 Delete Endurance Group: Not Supported 00:13:33.083 Delete NVM Set: Not Supported 00:13:33.083 Extended LBA Formats Supported: Not Supported 00:13:33.083 Flexible Data Placement Supported: Not Supported 00:13:33.083 00:13:33.083 Controller Memory Buffer Support 00:13:33.083 ================================ 00:13:33.083 
Supported: No 00:13:33.083 00:13:33.083 Persistent Memory Region Support 00:13:33.083 ================================ 00:13:33.083 Supported: No 00:13:33.083 00:13:33.083 Admin Command Set Attributes 00:13:33.083 ============================ 00:13:33.083 Security Send/Receive: Not Supported 00:13:33.083 Format NVM: Not Supported 00:13:33.083 Firmware Activate/Download: Not Supported 00:13:33.083 Namespace Management: Not Supported 00:13:33.083 Device Self-Test: Not Supported 00:13:33.083 Directives: Not Supported 00:13:33.083 NVMe-MI: Not Supported 00:13:33.083 Virtualization Management: Not Supported 00:13:33.083 Doorbell Buffer Config: Not Supported 00:13:33.083 Get LBA Status Capability: Not Supported 00:13:33.083 Command & Feature Lockdown Capability: Not Supported 00:13:33.083 Abort Command Limit: 4 00:13:33.083 Async Event Request Limit: 4 00:13:33.083 Number of Firmware Slots: N/A 00:13:33.083 Firmware Slot 1 Read-Only: N/A 00:13:33.083 Firmware Activation Without Reset: N/A 00:13:33.083 Multiple Update Detection Support: N/A 00:13:33.083 Firmware Update Granularity: No Information Provided 00:13:33.083 Per-Namespace SMART Log: No 00:13:33.083 Asymmetric Namespace Access Log Page: Not Supported 00:13:33.083 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:33.083 Command Effects Log Page: Supported 00:13:33.083 Get Log Page Extended Data: Supported 00:13:33.083 Telemetry Log Pages: Not Supported 00:13:33.083 Persistent Event Log Pages: Not Supported 00:13:33.083 Supported Log Pages Log Page: May Support 00:13:33.083 Commands Supported & Effects Log Page: Not Supported 00:13:33.083 Feature Identifiers & Effects Log Page:May Support 00:13:33.083 NVMe-MI Commands & Effects Log Page: May Support 00:13:33.083 Data Area 4 for Telemetry Log: Not Supported 00:13:33.083 Error Log Page Entries Supported: 128 00:13:33.083 Keep Alive: Supported 00:13:33.083 Keep Alive Granularity: 10000 ms 00:13:33.083 00:13:33.083 NVM Command Set Attributes 00:13:33.083 ========================== 00:13:33.083 Submission Queue Entry Size 00:13:33.083 Max: 64 00:13:33.083 Min: 64 00:13:33.083 Completion Queue Entry Size 00:13:33.083 Max: 16 00:13:33.083 Min: 16 00:13:33.083 Number of Namespaces: 32 00:13:33.083 Compare Command: Supported 00:13:33.083 Write Uncorrectable Command: Not Supported 00:13:33.083 Dataset Management Command: Supported 00:13:33.083 Write Zeroes Command: Supported 00:13:33.083 Set Features Save Field: Not Supported 00:13:33.083 Reservations: Not Supported 00:13:33.083 Timestamp: Not Supported 00:13:33.083 Copy: Supported 00:13:33.083 Volatile Write Cache: Present 00:13:33.083 Atomic Write Unit (Normal): 1 00:13:33.083 Atomic Write Unit (PFail): 1 00:13:33.083 Atomic Compare & Write Unit: 1 00:13:33.083 Fused Compare & Write: Supported 00:13:33.083 Scatter-Gather List 00:13:33.083 SGL Command Set: Supported (Dword aligned) 00:13:33.083 SGL Keyed: Not Supported 00:13:33.083 SGL Bit Bucket Descriptor: Not Supported 00:13:33.083 SGL Metadata Pointer: Not Supported 00:13:33.083 Oversized SGL: Not Supported 00:13:33.083 SGL Metadata Address: Not Supported 00:13:33.083 SGL Offset: Not Supported 00:13:33.083 Transport SGL Data Block: Not Supported 00:13:33.083 Replay Protected Memory Block: Not Supported 00:13:33.083 00:13:33.083 Firmware Slot Information 00:13:33.083 ========================= 00:13:33.083 Active slot: 1 00:13:33.083 Slot 1 Firmware Revision: 25.01 00:13:33.083 00:13:33.083 00:13:33.083 Commands Supported and Effects 00:13:33.083 ============================== 00:13:33.083 Admin 
Commands 00:13:33.083 -------------- 00:13:33.083 Get Log Page (02h): Supported 00:13:33.083 Identify (06h): Supported 00:13:33.083 Abort (08h): Supported 00:13:33.083 Set Features (09h): Supported 00:13:33.083 Get Features (0Ah): Supported 00:13:33.083 Asynchronous Event Request (0Ch): Supported 00:13:33.083 Keep Alive (18h): Supported 00:13:33.083 I/O Commands 00:13:33.083 ------------ 00:13:33.083 Flush (00h): Supported LBA-Change 00:13:33.083 Write (01h): Supported LBA-Change 00:13:33.083 Read (02h): Supported 00:13:33.083 Compare (05h): Supported 00:13:33.083 Write Zeroes (08h): Supported LBA-Change 00:13:33.083 Dataset Management (09h): Supported LBA-Change 00:13:33.083 Copy (19h): Supported LBA-Change 00:13:33.083 00:13:33.083 Error Log 00:13:33.083 ========= 00:13:33.083 00:13:33.083 Arbitration 00:13:33.083 =========== 00:13:33.083 Arbitration Burst: 1 00:13:33.083 00:13:33.083 Power Management 00:13:33.083 ================ 00:13:33.083 Number of Power States: 1 00:13:33.083 Current Power State: Power State #0 00:13:33.083 Power State #0: 00:13:33.083 Max Power: 0.00 W 00:13:33.083 Non-Operational State: Operational 00:13:33.083 Entry Latency: Not Reported 00:13:33.083 Exit Latency: Not Reported 00:13:33.083 Relative Read Throughput: 0 00:13:33.083 Relative Read Latency: 0 00:13:33.083 Relative Write Throughput: 0 00:13:33.083 Relative Write Latency: 0 00:13:33.083 Idle Power: Not Reported 00:13:33.083 Active Power: Not Reported 00:13:33.083 Non-Operational Permissive Mode: Not Supported 00:13:33.083 00:13:33.083 Health Information 00:13:33.083 ================== 00:13:33.083 Critical Warnings: 00:13:33.083 Available Spare Space: OK 00:13:33.083 Temperature: OK 00:13:33.083 Device Reliability: OK 00:13:33.083 Read Only: No 00:13:33.083 Volatile Memory Backup: OK 00:13:33.083 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:33.083 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:33.083 Available Spare: 0% 00:13:33.083 Available Sp[2024-11-20 16:15:04.239570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:33.083 [2024-11-20 16:15:04.239580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:33.083 [2024-11-20 16:15:04.239604] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:33.083 [2024-11-20 16:15:04.239612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.083 [2024-11-20 16:15:04.239618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.083 [2024-11-20 16:15:04.239624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.084 [2024-11-20 16:15:04.239629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.084 [2024-11-20 16:15:04.242209] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:33.084 [2024-11-20 16:15:04.242219] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:33.084 [2024-11-20 16:15:04.242770] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.084 [2024-11-20 16:15:04.242819] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:33.084 [2024-11-20 16:15:04.242825] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:33.084 [2024-11-20 16:15:04.243777] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:33.084 [2024-11-20 16:15:04.243787] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:33.084 [2024-11-20 16:15:04.243836] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:33.084 [2024-11-20 16:15:04.244807] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:33.084 are Threshold: 0% 00:13:33.084 Life Percentage Used: 0% 00:13:33.084 Data Units Read: 0 00:13:33.084 Data Units Written: 0 00:13:33.084 Host Read Commands: 0 00:13:33.084 Host Write Commands: 0 00:13:33.084 Controller Busy Time: 0 minutes 00:13:33.084 Power Cycles: 0 00:13:33.084 Power On Hours: 0 hours 00:13:33.084 Unsafe Shutdowns: 0 00:13:33.084 Unrecoverable Media Errors: 0 00:13:33.084 Lifetime Error Log Entries: 0 00:13:33.084 Warning Temperature Time: 0 minutes 00:13:33.084 Critical Temperature Time: 0 minutes 00:13:33.084 00:13:33.084 Number of Queues 00:13:33.084 ================ 00:13:33.084 Number of I/O Submission Queues: 127 00:13:33.084 Number of I/O Completion Queues: 127 00:13:33.084 00:13:33.084 Active Namespaces 00:13:33.084 ================= 00:13:33.084 Namespace ID:1 00:13:33.084 Error Recovery Timeout: Unlimited 00:13:33.084 Command Set Identifier: NVM (00h) 00:13:33.084 Deallocate: Supported 00:13:33.084 Deallocated/Unwritten Error: Not Supported 00:13:33.084 Deallocated Read Value: Unknown 00:13:33.084 Deallocate in Write Zeroes: Not Supported 00:13:33.084 Deallocated Guard Field: 0xFFFF 00:13:33.084 Flush: Supported 00:13:33.084 Reservation: Supported 00:13:33.084 Namespace Sharing Capabilities: Multiple Controllers 00:13:33.084 Size (in LBAs): 131072 (0GiB) 00:13:33.084 Capacity (in LBAs): 131072 (0GiB) 00:13:33.084 Utilization (in LBAs): 131072 (0GiB) 00:13:33.084 NGUID: 74A7C5AD5C0D4ECB90DED210594FECA1 00:13:33.084 UUID: 74a7c5ad-5c0d-4ecb-90de-d210594feca1 00:13:33.084 Thin Provisioning: Not Supported 00:13:33.084 Per-NS Atomic Units: Yes 00:13:33.084 Atomic Boundary Size (Normal): 0 00:13:33.084 Atomic Boundary Size (PFail): 0 00:13:33.084 Atomic Boundary Offset: 0 00:13:33.084 Maximum Single Source Range Length: 65535 00:13:33.084 Maximum Copy Length: 65535 00:13:33.084 Maximum Source Range Count: 1 00:13:33.084 NGUID/EUI64 Never Reused: No 00:13:33.084 Namespace Write Protected: No 00:13:33.084 Number of LBA Formats: 1 00:13:33.084 Current LBA Format: LBA Format #00 00:13:33.084 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:33.084 00:13:33.084 16:15:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
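Note: the capability and namespace listing above and the perf run whose results follow both reach the target through the same VFIOUSER transport string: trtype:VFIOUSER, a traddr pointing at the controller's vfio-user socket directory, and the subsystem NQN. A minimal bash sketch of that pattern, reusing the socket path and NQN from this run (on another setup both would differ):

# Transport string taken verbatim from this job; adjust traddr/subnqn for other targets.
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Controller data dump, as printed above:
"$SPDK/build/bin/spdk_nvme_identify" -r "$TRID" -g

# 5-second 4 KiB read workload at queue depth 128 pinned to core 1 (mask 0x2),
# matching the spdk_nvme_perf invocation above; its results are printed below.
"$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2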
00:13:33.341 [2024-11-20 16:15:04.476022] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:38.600 Initializing NVMe Controllers 00:13:38.600 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:38.600 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:38.600 Initialization complete. Launching workers. 00:13:38.600 ======================================================== 00:13:38.600 Latency(us) 00:13:38.600 Device Information : IOPS MiB/s Average min max 00:13:38.600 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39959.38 156.09 3203.51 945.90 9466.50 00:13:38.600 ======================================================== 00:13:38.600 Total : 39959.38 156.09 3203.51 945.90 9466.50 00:13:38.600 00:13:38.600 [2024-11-20 16:15:09.493985] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:38.600 16:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:38.600 [2024-11-20 16:15:09.727039] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:43.860 Initializing NVMe Controllers 00:13:43.860 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:43.860 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:43.860 Initialization complete. Launching workers. 
00:13:43.860 ======================================================== 00:13:43.860 Latency(us) 00:13:43.860 Device Information : IOPS MiB/s Average min max 00:13:43.860 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15968.56 62.38 8015.06 5994.60 15964.31 00:13:43.860 ======================================================== 00:13:43.860 Total : 15968.56 62.38 8015.06 5994.60 15964.31 00:13:43.860 00:13:43.860 [2024-11-20 16:15:14.762276] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:43.860 16:15:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:43.860 [2024-11-20 16:15:14.963250] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:49.119 [2024-11-20 16:15:20.031471] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:49.119 Initializing NVMe Controllers 00:13:49.119 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:49.119 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:49.119 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:49.119 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:49.119 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:49.119 Initialization complete. Launching workers. 00:13:49.119 Starting thread on core 2 00:13:49.119 Starting thread on core 3 00:13:49.119 Starting thread on core 1 00:13:49.119 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:49.119 [2024-11-20 16:15:20.322637] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:53.297 [2024-11-20 16:15:24.075406] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:53.297 Initializing NVMe Controllers 00:13:53.297 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:53.297 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:53.297 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:53.297 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:53.297 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:53.297 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:53.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:53.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:53.297 Initialization complete. Launching workers. 
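Note: the arbitration example launched above runs an I/O thread on each core in the 0xf mask (the "Associating ... with lcore N" lines) against the same vfio-user controller and reports per-core IO/s; those per-core results follow below. A re-invocation sketch, lifted from the command echoed in the log; the grep is only a convenience for pulling out the per-core lines:

# Paths and transport string are taken from this job's log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/arbitration" -t 3 -d 256 -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    | grep -E 'core [0-9]+:'    # convenience filter for the per-core IO/s lines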
00:13:53.297 Starting thread on core 1 with urgent priority queue 00:13:53.297 Starting thread on core 2 with urgent priority queue 00:13:53.297 Starting thread on core 3 with urgent priority queue 00:13:53.297 Starting thread on core 0 with urgent priority queue 00:13:53.297 SPDK bdev Controller (SPDK1 ) core 0: 7318.00 IO/s 13.66 secs/100000 ios 00:13:53.297 SPDK bdev Controller (SPDK1 ) core 1: 5399.33 IO/s 18.52 secs/100000 ios 00:13:53.297 SPDK bdev Controller (SPDK1 ) core 2: 6908.33 IO/s 14.48 secs/100000 ios 00:13:53.298 SPDK bdev Controller (SPDK1 ) core 3: 5455.33 IO/s 18.33 secs/100000 ios 00:13:53.298 ======================================================== 00:13:53.298 00:13:53.298 16:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:53.298 [2024-11-20 16:15:24.352721] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:53.298 Initializing NVMe Controllers 00:13:53.298 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:53.298 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:53.298 Namespace ID: 1 size: 0GB 00:13:53.298 Initialization complete. 00:13:53.298 INFO: using host memory buffer for IO 00:13:53.298 Hello world! 00:13:53.298 [2024-11-20 16:15:24.386952] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:53.298 16:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:53.555 [2024-11-20 16:15:24.672690] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:54.486 Initializing NVMe Controllers 00:13:54.486 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:54.486 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:54.486 Initialization complete. Launching workers. 
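Note: the submit and complete histograms that follow come from the overhead tool launched just above; it times each individual submission and completion call during a short 4 KiB workload and buckets those per-call latencies, which is why the output below is a cumulative distribution rather than an IOPS summary (-H on the command above is what enables the histogram output seen here). A sketch of that invocation, with arguments copied from this run:

# Sketch of the overhead run that produced the histograms below
# (1-second, 4 KiB I/O against the vfio-user1 controller).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/test/nvme/overhead/overhead" -o 4096 -t 1 -H -g -d 256 \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'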
00:13:54.486 submit (in ns) avg, min, max = 6534.7, 3161.9, 3999450.5 00:13:54.486 complete (in ns) avg, min, max = 19918.1, 1712.4, 7988578.1 00:13:54.486 00:13:54.486 Submit histogram 00:13:54.486 ================ 00:13:54.486 Range in us Cumulative Count 00:13:54.486 3.154 - 3.170: 0.0060% ( 1) 00:13:54.486 3.170 - 3.185: 0.0120% ( 1) 00:13:54.486 3.185 - 3.200: 0.0240% ( 2) 00:13:54.486 3.200 - 3.215: 0.0600% ( 6) 00:13:54.486 3.215 - 3.230: 0.1621% ( 17) 00:13:54.486 3.230 - 3.246: 0.4203% ( 43) 00:13:54.486 3.246 - 3.261: 1.2670% ( 141) 00:13:54.486 3.261 - 3.276: 4.8159% ( 591) 00:13:54.486 3.276 - 3.291: 10.3705% ( 925) 00:13:54.486 3.291 - 3.307: 16.6877% ( 1052) 00:13:54.486 3.307 - 3.322: 23.8215% ( 1188) 00:13:54.486 3.322 - 3.337: 29.8625% ( 1006) 00:13:54.486 3.337 - 3.352: 35.6452% ( 963) 00:13:54.486 3.352 - 3.368: 41.7582% ( 1018) 00:13:54.486 3.368 - 3.383: 48.0514% ( 1048) 00:13:54.486 3.383 - 3.398: 54.2064% ( 1025) 00:13:54.486 3.398 - 3.413: 59.1965% ( 831) 00:13:54.486 3.413 - 3.429: 67.1711% ( 1328) 00:13:54.486 3.429 - 3.444: 72.8037% ( 938) 00:13:54.486 3.444 - 3.459: 77.6437% ( 806) 00:13:54.486 3.459 - 3.474: 81.9972% ( 725) 00:13:54.486 3.474 - 3.490: 84.6754% ( 446) 00:13:54.486 3.490 - 3.505: 86.5129% ( 306) 00:13:54.486 3.505 - 3.520: 87.1314% ( 103) 00:13:54.486 3.520 - 3.535: 87.5038% ( 62) 00:13:54.486 3.535 - 3.550: 87.8881% ( 64) 00:13:54.486 3.550 - 3.566: 88.2664% ( 63) 00:13:54.486 3.566 - 3.581: 88.8489% ( 97) 00:13:54.486 3.581 - 3.596: 89.8517% ( 167) 00:13:54.486 3.596 - 3.611: 90.8665% ( 169) 00:13:54.486 3.611 - 3.627: 91.7853% ( 153) 00:13:54.486 3.627 - 3.642: 92.7941% ( 168) 00:13:54.486 3.642 - 3.657: 93.7789% ( 164) 00:13:54.486 3.657 - 3.672: 94.7577% ( 163) 00:13:54.486 3.672 - 3.688: 95.8686% ( 185) 00:13:54.486 3.688 - 3.703: 96.7694% ( 150) 00:13:54.486 3.703 - 3.718: 97.4119% ( 107) 00:13:54.486 3.718 - 3.733: 98.0184% ( 101) 00:13:54.486 3.733 - 3.749: 98.5288% ( 85) 00:13:54.486 3.749 - 3.764: 98.8410% ( 52) 00:13:54.486 3.764 - 3.779: 99.1113% ( 45) 00:13:54.486 3.779 - 3.794: 99.3154% ( 34) 00:13:54.486 3.794 - 3.810: 99.4355% ( 20) 00:13:54.486 3.810 - 3.825: 99.5496% ( 19) 00:13:54.486 3.825 - 3.840: 99.5917% ( 7) 00:13:54.486 3.840 - 3.855: 99.6277% ( 6) 00:13:54.486 3.855 - 3.870: 99.6577% ( 5) 00:13:54.486 3.901 - 3.931: 99.6637% ( 1) 00:13:54.486 5.120 - 5.150: 99.6697% ( 1) 00:13:54.486 5.333 - 5.364: 99.6757% ( 1) 00:13:54.486 5.364 - 5.394: 99.6817% ( 1) 00:13:54.486 5.394 - 5.425: 99.6877% ( 1) 00:13:54.486 5.455 - 5.486: 99.6998% ( 2) 00:13:54.486 5.486 - 5.516: 99.7118% ( 2) 00:13:54.486 5.577 - 5.608: 99.7178% ( 1) 00:13:54.486 5.669 - 5.699: 99.7238% ( 1) 00:13:54.486 5.699 - 5.730: 99.7298% ( 1) 00:13:54.486 5.730 - 5.760: 99.7358% ( 1) 00:13:54.486 5.851 - 5.882: 99.7418% ( 1) 00:13:54.486 5.882 - 5.912: 99.7538% ( 2) 00:13:54.486 5.912 - 5.943: 99.7598% ( 1) 00:13:54.486 6.004 - 6.034: 99.7658% ( 1) 00:13:54.486 6.034 - 6.065: 99.7718% ( 1) 00:13:54.486 6.095 - 6.126: 99.7778% ( 1) 00:13:54.486 6.126 - 6.156: 99.7838% ( 1) 00:13:54.486 6.248 - 6.278: 99.7898% ( 1) 00:13:54.486 6.309 - 6.339: 99.7958% ( 1) 00:13:54.486 6.339 - 6.370: 99.8018% ( 1) 00:13:54.486 6.491 - 6.522: 99.8078% ( 1) 00:13:54.486 6.613 - 6.644: 99.8138% ( 1) 00:13:54.486 6.674 - 6.705: 99.8319% ( 3) 00:13:54.486 6.705 - 6.735: 99.8379% ( 1) 00:13:54.486 7.162 - 7.192: 99.8499% ( 2) 00:13:54.486 7.223 - 7.253: 99.8559% ( 1) 00:13:54.486 7.284 - 7.314: 99.8679% ( 2) 00:13:54.486 7.375 - 7.406: 99.8739% ( 1) 00:13:54.486 7.406 - 7.436: 99.8799% 
( 1) 00:13:54.486 7.558 - 7.589: 99.8859% ( 1) 00:13:54.486 7.802 - 7.863: 99.8919% ( 1) 00:13:54.486 [2024-11-20 16:15:25.693563] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:54.744 8.046 - 8.107: 99.8979% ( 1) 00:13:54.744 8.350 - 8.411: 99.9039% ( 1) 00:13:54.744 9.204 - 9.265: 99.9099% ( 1) 00:13:54.744 11.703 - 11.764: 99.9159% ( 1) 00:13:54.744 14.385 - 14.446: 99.9219% ( 1) 00:13:54.744 3994.575 - 4025.783: 100.0000% ( 13) 00:13:54.744 00:13:54.744 Complete histogram 00:13:54.744 ================== 00:13:54.744 Range in us Cumulative Count 00:13:54.744 1.707 - 1.714: 0.0060% ( 1) 00:13:54.744 1.745 - 1.752: 0.0120% ( 1) 00:13:54.744 1.752 - 1.760: 0.0180% ( 1) 00:13:54.744 1.760 - 1.768: 0.1681% ( 25) 00:13:54.744 1.768 - 1.775: 0.6065% ( 73) 00:13:54.744 1.775 - 1.783: 1.3091% ( 117) 00:13:54.744 1.783 - 1.790: 2.0056% ( 116) 00:13:54.744 1.790 - 1.798: 2.6061% ( 100) 00:13:54.744 1.798 - 1.806: 3.3207% ( 119) 00:13:54.744 1.806 - 1.813: 11.9798% ( 1442) 00:13:54.744 1.813 - 1.821: 42.9412% ( 5156) 00:13:54.744 1.821 - 1.829: 73.9626% ( 5166) 00:13:54.744 1.829 - 1.836: 85.9905% ( 2003) 00:13:54.744 1.836 - 1.844: 89.7676% ( 629) 00:13:54.744 1.844 - 1.851: 92.5299% ( 460) 00:13:54.744 1.851 - 1.859: 94.0731% ( 257) 00:13:54.744 1.859 - 1.867: 94.6196% ( 91) 00:13:54.744 1.867 - 1.874: 94.9258% ( 51) 00:13:54.744 1.874 - 1.882: 95.3882% ( 77) 00:13:54.744 1.882 - 1.890: 96.3730% ( 164) 00:13:54.744 1.890 - 1.897: 97.5740% ( 200) 00:13:54.744 1.897 - 1.905: 98.5288% ( 159) 00:13:54.744 1.905 - 1.912: 99.0512% ( 87) 00:13:54.744 1.912 - 1.920: 99.2254% ( 29) 00:13:54.744 1.920 - 1.928: 99.3214% ( 16) 00:13:54.744 1.928 - 1.935: 99.3455% ( 4) 00:13:54.744 1.950 - 1.966: 99.3635% ( 3) 00:13:54.744 1.966 - 1.981: 99.3695% ( 1) 00:13:54.744 1.981 - 1.996: 99.3755% ( 1) 00:13:54.744 2.011 - 2.027: 99.3815% ( 1) 00:13:54.744 2.103 - 2.118: 99.3875% ( 1) 00:13:54.744 2.194 - 2.210: 99.3935% ( 1) 00:13:54.744 3.794 - 3.810: 99.4055% ( 2) 00:13:54.744 3.901 - 3.931: 99.4115% ( 1) 00:13:54.744 3.992 - 4.023: 99.4175% ( 1) 00:13:54.744 4.175 - 4.206: 99.4235% ( 1) 00:13:54.744 4.297 - 4.328: 99.4295% ( 1) 00:13:54.744 4.328 - 4.358: 99.4355% ( 1) 00:13:54.744 4.510 - 4.541: 99.4415% ( 1) 00:13:54.744 4.663 - 4.693: 99.4536% ( 2) 00:13:54.744 4.785 - 4.815: 99.4596% ( 1) 00:13:54.744 4.876 - 4.907: 99.4716% ( 2) 00:13:54.744 4.937 - 4.968: 99.4776% ( 1) 00:13:54.744 4.998 - 5.029: 99.4836% ( 1) 00:13:54.744 5.211 - 5.242: 99.4896% ( 1) 00:13:54.744 5.638 - 5.669: 99.4956% ( 1) 00:13:54.744 5.973 - 6.004: 99.5016% ( 1) 00:13:54.744 6.004 - 6.034: 99.5076% ( 1) 00:13:54.744 6.095 - 6.126: 99.5136% ( 1) 00:13:54.744 6.187 - 6.217: 99.5196% ( 1) 00:13:54.744 6.309 - 6.339: 99.5256% ( 1) 00:13:54.744 6.370 - 6.400: 99.5316% ( 1) 00:13:54.744 6.705 - 6.735: 99.5376% ( 1) 00:13:54.744 6.735 - 6.766: 99.5436% ( 1) 00:13:54.744 8.168 - 8.229: 99.5496% ( 1) 00:13:54.744 1654.004 - 1661.806: 99.5556% ( 1) 00:13:54.744 3978.971 - 3994.575: 99.5676% ( 2) 00:13:54.744 3994.575 - 4025.783: 99.9940% ( 71) 00:13:54.744 7957.943 - 7989.150: 100.0000% ( 1) 00:13:54.744 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:54.744 16:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:54.744 [ 00:13:54.744 { 00:13:54.744 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:54.744 "subtype": "Discovery", 00:13:54.744 "listen_addresses": [], 00:13:54.744 "allow_any_host": true, 00:13:54.744 "hosts": [] 00:13:54.744 }, 00:13:54.744 { 00:13:54.744 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:54.744 "subtype": "NVMe", 00:13:54.744 "listen_addresses": [ 00:13:54.744 { 00:13:54.744 "trtype": "VFIOUSER", 00:13:54.744 "adrfam": "IPv4", 00:13:54.744 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:54.744 "trsvcid": "0" 00:13:54.744 } 00:13:54.744 ], 00:13:54.744 "allow_any_host": true, 00:13:54.744 "hosts": [], 00:13:54.744 "serial_number": "SPDK1", 00:13:54.744 "model_number": "SPDK bdev Controller", 00:13:54.744 "max_namespaces": 32, 00:13:54.744 "min_cntlid": 1, 00:13:54.744 "max_cntlid": 65519, 00:13:54.744 "namespaces": [ 00:13:54.744 { 00:13:54.744 "nsid": 1, 00:13:54.744 "bdev_name": "Malloc1", 00:13:54.744 "name": "Malloc1", 00:13:54.744 "nguid": "74A7C5AD5C0D4ECB90DED210594FECA1", 00:13:54.744 "uuid": "74a7c5ad-5c0d-4ecb-90de-d210594feca1" 00:13:54.744 } 00:13:54.744 ] 00:13:54.744 }, 00:13:54.744 { 00:13:54.744 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:54.744 "subtype": "NVMe", 00:13:54.744 "listen_addresses": [ 00:13:54.744 { 00:13:54.744 "trtype": "VFIOUSER", 00:13:54.744 "adrfam": "IPv4", 00:13:54.744 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:54.744 "trsvcid": "0" 00:13:54.744 } 00:13:54.744 ], 00:13:54.744 "allow_any_host": true, 00:13:54.744 "hosts": [], 00:13:54.744 "serial_number": "SPDK2", 00:13:54.744 "model_number": "SPDK bdev Controller", 00:13:54.744 "max_namespaces": 32, 00:13:54.744 "min_cntlid": 1, 00:13:54.744 "max_cntlid": 65519, 00:13:54.744 "namespaces": [ 00:13:54.744 { 00:13:54.744 "nsid": 1, 00:13:54.744 "bdev_name": "Malloc2", 00:13:54.744 "name": "Malloc2", 00:13:54.744 "nguid": "1610B663147841D4A5AD21976FED021D", 00:13:54.744 "uuid": "1610b663-1478-41d4-a5ad-21976fed021d" 00:13:54.744 } 00:13:54.744 ] 00:13:54.744 } 00:13:54.744 ] 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1886768 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:54.744 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:55.002 [2024-11-20 16:15:26.086583] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:55.002 Malloc3 00:13:55.002 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:55.258 [2024-11-20 16:15:26.346498] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:55.258 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:55.258 Asynchronous Event Request test 00:13:55.258 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:55.258 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:55.258 Registering asynchronous event callbacks... 00:13:55.258 Starting namespace attribute notice tests for all controllers... 00:13:55.258 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:55.258 aer_cb - Changed Namespace 00:13:55.258 Cleaning up... 00:13:55.516 [ 00:13:55.516 { 00:13:55.516 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:55.516 "subtype": "Discovery", 00:13:55.516 "listen_addresses": [], 00:13:55.516 "allow_any_host": true, 00:13:55.516 "hosts": [] 00:13:55.517 }, 00:13:55.517 { 00:13:55.517 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:55.517 "subtype": "NVMe", 00:13:55.517 "listen_addresses": [ 00:13:55.517 { 00:13:55.517 "trtype": "VFIOUSER", 00:13:55.517 "adrfam": "IPv4", 00:13:55.517 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:55.517 "trsvcid": "0" 00:13:55.517 } 00:13:55.517 ], 00:13:55.517 "allow_any_host": true, 00:13:55.517 "hosts": [], 00:13:55.517 "serial_number": "SPDK1", 00:13:55.517 "model_number": "SPDK bdev Controller", 00:13:55.517 "max_namespaces": 32, 00:13:55.517 "min_cntlid": 1, 00:13:55.517 "max_cntlid": 65519, 00:13:55.517 "namespaces": [ 00:13:55.517 { 00:13:55.517 "nsid": 1, 00:13:55.517 "bdev_name": "Malloc1", 00:13:55.517 "name": "Malloc1", 00:13:55.517 "nguid": "74A7C5AD5C0D4ECB90DED210594FECA1", 00:13:55.517 "uuid": "74a7c5ad-5c0d-4ecb-90de-d210594feca1" 00:13:55.517 }, 00:13:55.517 { 00:13:55.517 "nsid": 2, 00:13:55.517 "bdev_name": "Malloc3", 00:13:55.517 "name": "Malloc3", 00:13:55.517 "nguid": "AC6C606E5B1E40A4830392397AD4AF3C", 00:13:55.517 "uuid": "ac6c606e-5b1e-40a4-8303-92397ad4af3c" 00:13:55.517 } 00:13:55.517 ] 00:13:55.517 }, 00:13:55.517 { 00:13:55.517 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:55.517 "subtype": "NVMe", 00:13:55.517 "listen_addresses": [ 00:13:55.517 { 00:13:55.517 "trtype": "VFIOUSER", 00:13:55.517 "adrfam": "IPv4", 00:13:55.517 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:55.517 "trsvcid": "0" 00:13:55.517 } 00:13:55.517 ], 00:13:55.517 "allow_any_host": true, 00:13:55.517 "hosts": [], 00:13:55.517 "serial_number": "SPDK2", 00:13:55.517 "model_number": "SPDK bdev 
Controller", 00:13:55.517 "max_namespaces": 32, 00:13:55.517 "min_cntlid": 1, 00:13:55.517 "max_cntlid": 65519, 00:13:55.517 "namespaces": [ 00:13:55.517 { 00:13:55.517 "nsid": 1, 00:13:55.517 "bdev_name": "Malloc2", 00:13:55.517 "name": "Malloc2", 00:13:55.517 "nguid": "1610B663147841D4A5AD21976FED021D", 00:13:55.517 "uuid": "1610b663-1478-41d4-a5ad-21976fed021d" 00:13:55.517 } 00:13:55.517 ] 00:13:55.517 } 00:13:55.517 ] 00:13:55.517 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1886768 00:13:55.517 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:55.517 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:55.517 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:55.517 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:55.517 [2024-11-20 16:15:26.597655] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:13:55.517 [2024-11-20 16:15:26.597690] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886998 ] 00:13:55.517 [2024-11-20 16:15:26.637531] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:55.517 [2024-11-20 16:15:26.646450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:55.517 [2024-11-20 16:15:26.646473] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f22c8b16000 00:13:55.517 [2024-11-20 16:15:26.647449] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:55.517 [2024-11-20 16:15:26.648456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:55.517 [2024-11-20 16:15:26.649459] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:55.517 [2024-11-20 16:15:26.650467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:55.517 [2024-11-20 16:15:26.651472] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:55.517 [2024-11-20 16:15:26.652480] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:55.517 [2024-11-20 16:15:26.653488] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:55.517 [2024-11-20 16:15:26.654494] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:13:55.517 [2024-11-20 16:15:26.655499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:55.517 [2024-11-20 16:15:26.655511] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f22c8b0b000 00:13:55.517 [2024-11-20 16:15:26.656424] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:55.517 [2024-11-20 16:15:26.665776] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:55.517 [2024-11-20 16:15:26.665800] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:55.517 [2024-11-20 16:15:26.670878] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:55.517 [2024-11-20 16:15:26.670917] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:55.517 [2024-11-20 16:15:26.670984] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:55.517 [2024-11-20 16:15:26.670996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:55.517 [2024-11-20 16:15:26.671001] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:55.517 [2024-11-20 16:15:26.671885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:55.517 [2024-11-20 16:15:26.671894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:55.517 [2024-11-20 16:15:26.671901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:55.517 [2024-11-20 16:15:26.672893] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:55.517 [2024-11-20 16:15:26.672902] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:55.517 [2024-11-20 16:15:26.672909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:55.517 [2024-11-20 16:15:26.673898] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:55.517 [2024-11-20 16:15:26.673907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:55.517 [2024-11-20 16:15:26.674904] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:55.517 [2024-11-20 16:15:26.674912] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
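Note: the per-component *DEBUG* lines in this block (nvme_ctrlr, nvme_vfio_user, vfio_user_pci) are only emitted because the identify run above was started with explicit log flags; without them the tool prints just the controller summary. A sketch of that invocation against the second vfio-user controller, with arguments taken from the command line recorded above:

# Debug-logging identify run against the second vfio-user controller; the -L
# flags below are what produce the nvme/nvme_vfio/vfio_pci *DEBUG* lines here.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_nvme_identify" \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -g -L nvme -L nvme_vfio -L vfio_pci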
00:13:55.517 [2024-11-20 16:15:26.674917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:55.517 [2024-11-20 16:15:26.674922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:55.517 [2024-11-20 16:15:26.675030] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:55.517 [2024-11-20 16:15:26.675035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:55.517 [2024-11-20 16:15:26.675039] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:55.518 [2024-11-20 16:15:26.675908] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:55.518 [2024-11-20 16:15:26.676917] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:55.518 [2024-11-20 16:15:26.677928] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:55.518 [2024-11-20 16:15:26.678930] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:55.518 [2024-11-20 16:15:26.678965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:55.518 [2024-11-20 16:15:26.679944] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:55.518 [2024-11-20 16:15:26.679953] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:55.518 [2024-11-20 16:15:26.679957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.679974] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:55.518 [2024-11-20 16:15:26.679980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.679991] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:55.518 [2024-11-20 16:15:26.679996] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:55.518 [2024-11-20 16:15:26.679999] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:55.518 [2024-11-20 16:15:26.680010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:55.518 [2024-11-20 16:15:26.686210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:55.518 
[2024-11-20 16:15:26.686221] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:55.518 [2024-11-20 16:15:26.686225] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:55.518 [2024-11-20 16:15:26.686229] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:55.518 [2024-11-20 16:15:26.686233] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:55.518 [2024-11-20 16:15:26.686240] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:55.518 [2024-11-20 16:15:26.686244] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:55.518 [2024-11-20 16:15:26.686248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.686256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.686265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:55.518 [2024-11-20 16:15:26.694206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:55.518 [2024-11-20 16:15:26.694219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.518 [2024-11-20 16:15:26.694229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.518 [2024-11-20 16:15:26.694237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.518 [2024-11-20 16:15:26.694244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.518 [2024-11-20 16:15:26.694248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.694254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.694262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:55.518 [2024-11-20 16:15:26.702206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:55.518 [2024-11-20 16:15:26.702216] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:55.518 [2024-11-20 16:15:26.702221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:13:55.518 [2024-11-20 16:15:26.702226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.702232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.702239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:55.518 [2024-11-20 16:15:26.710207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:55.518 [2024-11-20 16:15:26.710261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.710268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.710275] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:55.518 [2024-11-20 16:15:26.710279] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:55.518 [2024-11-20 16:15:26.710282] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:55.518 [2024-11-20 16:15:26.710288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:55.518 [2024-11-20 16:15:26.718207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:55.518 [2024-11-20 16:15:26.718218] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:55.518 [2024-11-20 16:15:26.718230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.718237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.718243] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:55.518 [2024-11-20 16:15:26.718247] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:55.518 [2024-11-20 16:15:26.718250] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:55.518 [2024-11-20 16:15:26.718258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:55.518 [2024-11-20 16:15:26.726207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:55.518 [2024-11-20 16:15:26.726223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.726230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.726237] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:55.518 [2024-11-20 16:15:26.726241] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:55.518 [2024-11-20 16:15:26.726245] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:55.518 [2024-11-20 16:15:26.726250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:55.518 [2024-11-20 16:15:26.734208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:55.518 [2024-11-20 16:15:26.734218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.734225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.734232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.734237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.734241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.734246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.734250] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:55.518 [2024-11-20 16:15:26.734254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:55.518 [2024-11-20 16:15:26.734259] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:55.518 [2024-11-20 16:15:26.734273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:55.518 [2024-11-20 16:15:26.742207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:55.518 [2024-11-20 16:15:26.742226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:55.776 [2024-11-20 16:15:26.750207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:55.776 [2024-11-20 16:15:26.750219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:55.776 [2024-11-20 16:15:26.758206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:13:55.776 [2024-11-20 16:15:26.758218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:55.776 [2024-11-20 16:15:26.766210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:55.776 [2024-11-20 16:15:26.766225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:55.776 [2024-11-20 16:15:26.766230] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:55.776 [2024-11-20 16:15:26.766233] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:55.776 [2024-11-20 16:15:26.766236] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:55.776 [2024-11-20 16:15:26.766239] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:55.776 [2024-11-20 16:15:26.766245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:55.776 [2024-11-20 16:15:26.766252] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:55.776 [2024-11-20 16:15:26.766256] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:55.776 [2024-11-20 16:15:26.766259] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:55.776 [2024-11-20 16:15:26.766264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:55.776 [2024-11-20 16:15:26.766270] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:55.776 [2024-11-20 16:15:26.766274] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:55.776 [2024-11-20 16:15:26.766277] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:55.776 [2024-11-20 16:15:26.766282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:55.776 [2024-11-20 16:15:26.766289] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:55.776 [2024-11-20 16:15:26.766293] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:55.776 [2024-11-20 16:15:26.766296] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:55.776 [2024-11-20 16:15:26.766301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:55.776 [2024-11-20 16:15:26.774208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:55.776 [2024-11-20 16:15:26.774221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:55.776 [2024-11-20 16:15:26.774230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:55.776 
[2024-11-20 16:15:26.774236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:55.776 ===================================================== 00:13:55.776 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:55.776 ===================================================== 00:13:55.776 Controller Capabilities/Features 00:13:55.776 ================================ 00:13:55.776 Vendor ID: 4e58 00:13:55.776 Subsystem Vendor ID: 4e58 00:13:55.776 Serial Number: SPDK2 00:13:55.776 Model Number: SPDK bdev Controller 00:13:55.776 Firmware Version: 25.01 00:13:55.776 Recommended Arb Burst: 6 00:13:55.776 IEEE OUI Identifier: 8d 6b 50 00:13:55.776 Multi-path I/O 00:13:55.776 May have multiple subsystem ports: Yes 00:13:55.776 May have multiple controllers: Yes 00:13:55.776 Associated with SR-IOV VF: No 00:13:55.776 Max Data Transfer Size: 131072 00:13:55.776 Max Number of Namespaces: 32 00:13:55.776 Max Number of I/O Queues: 127 00:13:55.776 NVMe Specification Version (VS): 1.3 00:13:55.776 NVMe Specification Version (Identify): 1.3 00:13:55.776 Maximum Queue Entries: 256 00:13:55.776 Contiguous Queues Required: Yes 00:13:55.776 Arbitration Mechanisms Supported 00:13:55.776 Weighted Round Robin: Not Supported 00:13:55.776 Vendor Specific: Not Supported 00:13:55.776 Reset Timeout: 15000 ms 00:13:55.776 Doorbell Stride: 4 bytes 00:13:55.776 NVM Subsystem Reset: Not Supported 00:13:55.776 Command Sets Supported 00:13:55.776 NVM Command Set: Supported 00:13:55.776 Boot Partition: Not Supported 00:13:55.776 Memory Page Size Minimum: 4096 bytes 00:13:55.776 Memory Page Size Maximum: 4096 bytes 00:13:55.776 Persistent Memory Region: Not Supported 00:13:55.776 Optional Asynchronous Events Supported 00:13:55.776 Namespace Attribute Notices: Supported 00:13:55.776 Firmware Activation Notices: Not Supported 00:13:55.776 ANA Change Notices: Not Supported 00:13:55.776 PLE Aggregate Log Change Notices: Not Supported 00:13:55.776 LBA Status Info Alert Notices: Not Supported 00:13:55.776 EGE Aggregate Log Change Notices: Not Supported 00:13:55.776 Normal NVM Subsystem Shutdown event: Not Supported 00:13:55.776 Zone Descriptor Change Notices: Not Supported 00:13:55.776 Discovery Log Change Notices: Not Supported 00:13:55.776 Controller Attributes 00:13:55.776 128-bit Host Identifier: Supported 00:13:55.776 Non-Operational Permissive Mode: Not Supported 00:13:55.776 NVM Sets: Not Supported 00:13:55.776 Read Recovery Levels: Not Supported 00:13:55.776 Endurance Groups: Not Supported 00:13:55.776 Predictable Latency Mode: Not Supported 00:13:55.776 Traffic Based Keep ALive: Not Supported 00:13:55.776 Namespace Granularity: Not Supported 00:13:55.776 SQ Associations: Not Supported 00:13:55.776 UUID List: Not Supported 00:13:55.776 Multi-Domain Subsystem: Not Supported 00:13:55.776 Fixed Capacity Management: Not Supported 00:13:55.776 Variable Capacity Management: Not Supported 00:13:55.776 Delete Endurance Group: Not Supported 00:13:55.776 Delete NVM Set: Not Supported 00:13:55.776 Extended LBA Formats Supported: Not Supported 00:13:55.776 Flexible Data Placement Supported: Not Supported 00:13:55.776 00:13:55.776 Controller Memory Buffer Support 00:13:55.776 ================================ 00:13:55.776 Supported: No 00:13:55.776 00:13:55.776 Persistent Memory Region Support 00:13:55.777 ================================ 00:13:55.777 Supported: No 00:13:55.777 00:13:55.777 Admin Command Set Attributes 
00:13:55.777 ============================ 00:13:55.777 Security Send/Receive: Not Supported 00:13:55.777 Format NVM: Not Supported 00:13:55.777 Firmware Activate/Download: Not Supported 00:13:55.777 Namespace Management: Not Supported 00:13:55.777 Device Self-Test: Not Supported 00:13:55.777 Directives: Not Supported 00:13:55.777 NVMe-MI: Not Supported 00:13:55.777 Virtualization Management: Not Supported 00:13:55.777 Doorbell Buffer Config: Not Supported 00:13:55.777 Get LBA Status Capability: Not Supported 00:13:55.777 Command & Feature Lockdown Capability: Not Supported 00:13:55.777 Abort Command Limit: 4 00:13:55.777 Async Event Request Limit: 4 00:13:55.777 Number of Firmware Slots: N/A 00:13:55.777 Firmware Slot 1 Read-Only: N/A 00:13:55.777 Firmware Activation Without Reset: N/A 00:13:55.777 Multiple Update Detection Support: N/A 00:13:55.777 Firmware Update Granularity: No Information Provided 00:13:55.777 Per-Namespace SMART Log: No 00:13:55.777 Asymmetric Namespace Access Log Page: Not Supported 00:13:55.777 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:55.777 Command Effects Log Page: Supported 00:13:55.777 Get Log Page Extended Data: Supported 00:13:55.777 Telemetry Log Pages: Not Supported 00:13:55.777 Persistent Event Log Pages: Not Supported 00:13:55.777 Supported Log Pages Log Page: May Support 00:13:55.777 Commands Supported & Effects Log Page: Not Supported 00:13:55.777 Feature Identifiers & Effects Log Page:May Support 00:13:55.777 NVMe-MI Commands & Effects Log Page: May Support 00:13:55.777 Data Area 4 for Telemetry Log: Not Supported 00:13:55.777 Error Log Page Entries Supported: 128 00:13:55.777 Keep Alive: Supported 00:13:55.777 Keep Alive Granularity: 10000 ms 00:13:55.777 00:13:55.777 NVM Command Set Attributes 00:13:55.777 ========================== 00:13:55.777 Submission Queue Entry Size 00:13:55.777 Max: 64 00:13:55.777 Min: 64 00:13:55.777 Completion Queue Entry Size 00:13:55.777 Max: 16 00:13:55.777 Min: 16 00:13:55.777 Number of Namespaces: 32 00:13:55.777 Compare Command: Supported 00:13:55.777 Write Uncorrectable Command: Not Supported 00:13:55.777 Dataset Management Command: Supported 00:13:55.777 Write Zeroes Command: Supported 00:13:55.777 Set Features Save Field: Not Supported 00:13:55.777 Reservations: Not Supported 00:13:55.777 Timestamp: Not Supported 00:13:55.777 Copy: Supported 00:13:55.777 Volatile Write Cache: Present 00:13:55.777 Atomic Write Unit (Normal): 1 00:13:55.777 Atomic Write Unit (PFail): 1 00:13:55.777 Atomic Compare & Write Unit: 1 00:13:55.777 Fused Compare & Write: Supported 00:13:55.777 Scatter-Gather List 00:13:55.777 SGL Command Set: Supported (Dword aligned) 00:13:55.777 SGL Keyed: Not Supported 00:13:55.777 SGL Bit Bucket Descriptor: Not Supported 00:13:55.777 SGL Metadata Pointer: Not Supported 00:13:55.777 Oversized SGL: Not Supported 00:13:55.777 SGL Metadata Address: Not Supported 00:13:55.777 SGL Offset: Not Supported 00:13:55.777 Transport SGL Data Block: Not Supported 00:13:55.777 Replay Protected Memory Block: Not Supported 00:13:55.777 00:13:55.777 Firmware Slot Information 00:13:55.777 ========================= 00:13:55.777 Active slot: 1 00:13:55.777 Slot 1 Firmware Revision: 25.01 00:13:55.777 00:13:55.777 00:13:55.777 Commands Supported and Effects 00:13:55.777 ============================== 00:13:55.777 Admin Commands 00:13:55.777 -------------- 00:13:55.777 Get Log Page (02h): Supported 00:13:55.777 Identify (06h): Supported 00:13:55.777 Abort (08h): Supported 00:13:55.777 Set Features (09h): Supported 
00:13:55.777 Get Features (0Ah): Supported 00:13:55.777 Asynchronous Event Request (0Ch): Supported 00:13:55.777 Keep Alive (18h): Supported 00:13:55.777 I/O Commands 00:13:55.777 ------------ 00:13:55.777 Flush (00h): Supported LBA-Change 00:13:55.777 Write (01h): Supported LBA-Change 00:13:55.777 Read (02h): Supported 00:13:55.777 Compare (05h): Supported 00:13:55.777 Write Zeroes (08h): Supported LBA-Change 00:13:55.777 Dataset Management (09h): Supported LBA-Change 00:13:55.777 Copy (19h): Supported LBA-Change 00:13:55.777 00:13:55.777 Error Log 00:13:55.777 ========= 00:13:55.777 00:13:55.777 Arbitration 00:13:55.777 =========== 00:13:55.777 Arbitration Burst: 1 00:13:55.777 00:13:55.777 Power Management 00:13:55.777 ================ 00:13:55.777 Number of Power States: 1 00:13:55.777 Current Power State: Power State #0 00:13:55.777 Power State #0: 00:13:55.777 Max Power: 0.00 W 00:13:55.777 Non-Operational State: Operational 00:13:55.777 Entry Latency: Not Reported 00:13:55.777 Exit Latency: Not Reported 00:13:55.777 Relative Read Throughput: 0 00:13:55.777 Relative Read Latency: 0 00:13:55.777 Relative Write Throughput: 0 00:13:55.777 Relative Write Latency: 0 00:13:55.777 Idle Power: Not Reported 00:13:55.777 Active Power: Not Reported 00:13:55.777 Non-Operational Permissive Mode: Not Supported 00:13:55.777 00:13:55.777 Health Information 00:13:55.777 ================== 00:13:55.777 Critical Warnings: 00:13:55.777 Available Spare Space: OK 00:13:55.777 Temperature: OK 00:13:55.777 Device Reliability: OK 00:13:55.777 Read Only: No 00:13:55.777 Volatile Memory Backup: OK 00:13:55.777 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:55.777 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:55.777 Available Spare: 0% 00:13:55.777 Available Sp[2024-11-20 16:15:26.774326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:55.777 [2024-11-20 16:15:26.782206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:55.777 [2024-11-20 16:15:26.782236] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:55.777 [2024-11-20 16:15:26.782244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.777 [2024-11-20 16:15:26.782250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.777 [2024-11-20 16:15:26.782257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.777 [2024-11-20 16:15:26.782263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.777 [2024-11-20 16:15:26.782310] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:55.777 [2024-11-20 16:15:26.782320] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:55.777 [2024-11-20 16:15:26.783319] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:55.777 [2024-11-20 16:15:26.783361] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:55.777 [2024-11-20 16:15:26.783367] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:55.777 [2024-11-20 16:15:26.784321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:55.777 [2024-11-20 16:15:26.784332] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:55.777 [2024-11-20 16:15:26.784377] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:55.777 [2024-11-20 16:15:26.787206] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:55.777 are Threshold: 0% 00:13:55.777 Life Percentage Used: 0% 00:13:55.777 Data Units Read: 0 00:13:55.777 Data Units Written: 0 00:13:55.777 Host Read Commands: 0 00:13:55.777 Host Write Commands: 0 00:13:55.777 Controller Busy Time: 0 minutes 00:13:55.777 Power Cycles: 0 00:13:55.777 Power On Hours: 0 hours 00:13:55.777 Unsafe Shutdowns: 0 00:13:55.777 Unrecoverable Media Errors: 0 00:13:55.777 Lifetime Error Log Entries: 0 00:13:55.777 Warning Temperature Time: 0 minutes 00:13:55.777 Critical Temperature Time: 0 minutes 00:13:55.777 00:13:55.777 Number of Queues 00:13:55.777 ================ 00:13:55.777 Number of I/O Submission Queues: 127 00:13:55.777 Number of I/O Completion Queues: 127 00:13:55.777 00:13:55.777 Active Namespaces 00:13:55.777 ================= 00:13:55.777 Namespace ID:1 00:13:55.777 Error Recovery Timeout: Unlimited 00:13:55.777 Command Set Identifier: NVM (00h) 00:13:55.777 Deallocate: Supported 00:13:55.777 Deallocated/Unwritten Error: Not Supported 00:13:55.777 Deallocated Read Value: Unknown 00:13:55.777 Deallocate in Write Zeroes: Not Supported 00:13:55.777 Deallocated Guard Field: 0xFFFF 00:13:55.777 Flush: Supported 00:13:55.777 Reservation: Supported 00:13:55.777 Namespace Sharing Capabilities: Multiple Controllers 00:13:55.777 Size (in LBAs): 131072 (0GiB) 00:13:55.778 Capacity (in LBAs): 131072 (0GiB) 00:13:55.778 Utilization (in LBAs): 131072 (0GiB) 00:13:55.778 NGUID: 1610B663147841D4A5AD21976FED021D 00:13:55.778 UUID: 1610b663-1478-41d4-a5ad-21976fed021d 00:13:55.778 Thin Provisioning: Not Supported 00:13:55.778 Per-NS Atomic Units: Yes 00:13:55.778 Atomic Boundary Size (Normal): 0 00:13:55.778 Atomic Boundary Size (PFail): 0 00:13:55.778 Atomic Boundary Offset: 0 00:13:55.778 Maximum Single Source Range Length: 65535 00:13:55.778 Maximum Copy Length: 65535 00:13:55.778 Maximum Source Range Count: 1 00:13:55.778 NGUID/EUI64 Never Reused: No 00:13:55.778 Namespace Write Protected: No 00:13:55.778 Number of LBA Formats: 1 00:13:55.778 Current LBA Format: LBA Format #00 00:13:55.778 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:55.778 00:13:55.778 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:56.034 [2024-11-20 16:15:27.017452] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:01.291 Initializing NVMe Controllers 00:14:01.291 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:01.291 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:01.291 Initialization complete. Launching workers. 00:14:01.291 ======================================================== 00:14:01.291 Latency(us) 00:14:01.291 Device Information : IOPS MiB/s Average min max 00:14:01.291 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39943.98 156.03 3204.83 942.83 7611.97 00:14:01.291 ======================================================== 00:14:01.291 Total : 39943.98 156.03 3204.83 942.83 7611.97 00:14:01.291 00:14:01.291 [2024-11-20 16:15:32.118459] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:01.291 16:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:01.291 [2024-11-20 16:15:32.352239] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:06.550 Initializing NVMe Controllers 00:14:06.550 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:06.550 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:06.550 Initialization complete. Launching workers. 00:14:06.550 ======================================================== 00:14:06.550 Latency(us) 00:14:06.550 Device Information : IOPS MiB/s Average min max 00:14:06.550 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39913.68 155.91 3206.76 969.17 9593.63 00:14:06.550 ======================================================== 00:14:06.550 Total : 39913.68 155.91 3206.76 969.17 9593.63 00:14:06.550 00:14:06.550 [2024-11-20 16:15:37.371717] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:06.550 16:15:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:06.550 [2024-11-20 16:15:37.581974] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:11.811 [2024-11-20 16:15:42.708306] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:11.811 Initializing NVMe Controllers 00:14:11.811 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:11.811 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:11.811 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:11.811 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:11.811 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:11.811 Initialization complete. Launching workers. 
00:14:11.811 Starting thread on core 2 00:14:11.811 Starting thread on core 3 00:14:11.811 Starting thread on core 1 00:14:11.811 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:11.811 [2024-11-20 16:15:43.005640] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:15.090 [2024-11-20 16:15:46.074335] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:15.090 Initializing NVMe Controllers 00:14:15.090 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:15.090 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:15.090 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:15.090 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:15.090 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:15.090 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:15.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:15.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:15.091 Initialization complete. Launching workers. 00:14:15.091 Starting thread on core 1 with urgent priority queue 00:14:15.091 Starting thread on core 2 with urgent priority queue 00:14:15.091 Starting thread on core 3 with urgent priority queue 00:14:15.091 Starting thread on core 0 with urgent priority queue 00:14:15.091 SPDK bdev Controller (SPDK2 ) core 0: 7968.33 IO/s 12.55 secs/100000 ios 00:14:15.091 SPDK bdev Controller (SPDK2 ) core 1: 8088.00 IO/s 12.36 secs/100000 ios 00:14:15.091 SPDK bdev Controller (SPDK2 ) core 2: 9616.33 IO/s 10.40 secs/100000 ios 00:14:15.091 SPDK bdev Controller (SPDK2 ) core 3: 8963.00 IO/s 11.16 secs/100000 ios 00:14:15.091 ======================================================== 00:14:15.091 00:14:15.091 16:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:15.348 [2024-11-20 16:15:46.356753] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:15.348 Initializing NVMe Controllers 00:14:15.348 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:15.348 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:15.348 Namespace ID: 1 size: 0GB 00:14:15.348 Initialization complete. 00:14:15.348 INFO: using host memory buffer for IO 00:14:15.348 Hello world! 
00:14:15.348 [2024-11-20 16:15:46.366811] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:15.348 16:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:15.606 [2024-11-20 16:15:46.647914] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:16.537 Initializing NVMe Controllers 00:14:16.537 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:16.537 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:16.537 Initialization complete. Launching workers. 00:14:16.537 submit (in ns) avg, min, max = 6664.1, 3139.0, 3999478.1 00:14:16.537 complete (in ns) avg, min, max = 18319.0, 1719.0, 4994779.0 00:14:16.537 00:14:16.537 Submit histogram 00:14:16.537 ================ 00:14:16.537 Range in us Cumulative Count 00:14:16.537 3.139 - 3.154: 0.0120% ( 2) 00:14:16.537 3.154 - 3.170: 0.0300% ( 3) 00:14:16.537 3.170 - 3.185: 0.0601% ( 5) 00:14:16.537 3.185 - 3.200: 0.4864% ( 71) 00:14:16.537 3.200 - 3.215: 2.5222% ( 339) 00:14:16.537 3.215 - 3.230: 7.7288% ( 867) 00:14:16.537 3.230 - 3.246: 13.1696% ( 906) 00:14:16.538 3.246 - 3.261: 19.6373% ( 1077) 00:14:16.538 3.261 - 3.276: 26.8736% ( 1205) 00:14:16.538 3.276 - 3.291: 32.7408% ( 977) 00:14:16.538 3.291 - 3.307: 38.1095% ( 894) 00:14:16.538 3.307 - 3.322: 43.5323% ( 903) 00:14:16.538 3.322 - 3.337: 49.4415% ( 984) 00:14:16.538 3.337 - 3.352: 54.5820% ( 856) 00:14:16.538 3.352 - 3.368: 60.6233% ( 1006) 00:14:16.538 3.368 - 3.383: 69.3610% ( 1455) 00:14:16.538 3.383 - 3.398: 74.7298% ( 894) 00:14:16.538 3.398 - 3.413: 79.3418% ( 768) 00:14:16.538 3.413 - 3.429: 82.9450% ( 600) 00:14:16.538 3.429 - 3.444: 85.3651% ( 403) 00:14:16.538 3.444 - 3.459: 86.8004% ( 239) 00:14:16.538 3.459 - 3.474: 87.3649% ( 94) 00:14:16.538 3.474 - 3.490: 87.6832% ( 53) 00:14:16.538 3.490 - 3.505: 88.0195% ( 56) 00:14:16.538 3.505 - 3.520: 88.5900% ( 95) 00:14:16.538 3.520 - 3.535: 89.2505% ( 110) 00:14:16.538 3.535 - 3.550: 90.2594% ( 168) 00:14:16.538 3.550 - 3.566: 91.2143% ( 159) 00:14:16.538 3.566 - 3.581: 92.2352% ( 170) 00:14:16.538 3.581 - 3.596: 93.1360% ( 150) 00:14:16.538 3.596 - 3.611: 94.2109% ( 179) 00:14:16.538 3.611 - 3.627: 95.2979% ( 181) 00:14:16.538 3.627 - 3.642: 96.4088% ( 185) 00:14:16.538 3.642 - 3.657: 97.2616% ( 142) 00:14:16.538 3.657 - 3.672: 97.9222% ( 110) 00:14:16.538 3.672 - 3.688: 98.3005% ( 63) 00:14:16.538 3.688 - 3.703: 98.7029% ( 67) 00:14:16.538 3.703 - 3.718: 98.9671% ( 44) 00:14:16.538 3.718 - 3.733: 99.1533% ( 31) 00:14:16.538 3.733 - 3.749: 99.3995% ( 41) 00:14:16.538 3.749 - 3.764: 99.4956% ( 16) 00:14:16.538 3.764 - 3.779: 99.5736% ( 13) 00:14:16.538 3.779 - 3.794: 99.6157% ( 7) 00:14:16.538 3.794 - 3.810: 99.6397% ( 4) 00:14:16.538 3.810 - 3.825: 99.6577% ( 3) 00:14:16.538 3.825 - 3.840: 99.6697% ( 2) 00:14:16.538 3.840 - 3.855: 99.6757% ( 1) 00:14:16.538 3.886 - 3.901: 99.6817% ( 1) 00:14:16.538 4.023 - 4.053: 99.6877% ( 1) 00:14:16.538 5.272 - 5.303: 99.7057% ( 3) 00:14:16.538 5.547 - 5.577: 99.7117% ( 1) 00:14:16.538 5.638 - 5.669: 99.7238% ( 2) 00:14:16.538 5.669 - 5.699: 99.7298% ( 1) 00:14:16.538 5.699 - 5.730: 99.7358% ( 1) 00:14:16.538 5.821 - 5.851: 99.7418% ( 1) 00:14:16.538 5.851 - 5.882: 99.7478% ( 1) 00:14:16.538 5.943 - 5.973: 99.7538% ( 1) 00:14:16.538 
6.217 - 6.248: 99.7598% ( 1) 00:14:16.538 6.552 - 6.583: 99.7718% ( 2) 00:14:16.538 6.613 - 6.644: 99.7778% ( 1) 00:14:16.538 6.705 - 6.735: 99.7898% ( 2) 00:14:16.538 6.735 - 6.766: 99.7958% ( 1) 00:14:16.538 6.796 - 6.827: 99.8018% ( 1) 00:14:16.538 6.979 - 7.010: 99.8078% ( 1) 00:14:16.538 7.070 - 7.101: 99.8138% ( 1) 00:14:16.538 7.162 - 7.192: 99.8198% ( 1) 00:14:16.538 7.192 - 7.223: 99.8258% ( 1) 00:14:16.538 7.253 - 7.284: 99.8319% ( 1) 00:14:16.538 7.284 - 7.314: 99.8379% ( 1) 00:14:16.538 7.314 - 7.345: 99.8439% ( 1) 00:14:16.538 7.345 - 7.375: 99.8499% ( 1) 00:14:16.538 7.436 - 7.467: 99.8559% ( 1) 00:14:16.538 7.528 - 7.558: 99.8619% ( 1) 00:14:16.538 7.650 - 7.680: 99.8679% ( 1) 00:14:16.538 7.802 - 7.863: 99.8739% ( 1) 00:14:16.538 8.229 - 8.290: 99.8919% ( 3) 00:14:16.538 8.290 - 8.350: 99.8979% ( 1) 00:14:16.538 8.411 - 8.472: 99.9039% ( 1) 00:14:16.538 8.655 - 8.716: 99.9099% ( 1) 00:14:16.538 [2024-11-20 16:15:47.748186] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:16.796 11.764 - 11.825: 99.9159% ( 1) 00:14:16.796 3011.535 - 3027.139: 99.9219% ( 1) 00:14:16.796 3994.575 - 4025.783: 100.0000% ( 13) 00:14:16.796 00:14:16.796 Complete histogram 00:14:16.796 ================== 00:14:16.796 Range in us Cumulative Count 00:14:16.796 1.714 - 1.722: 0.0300% ( 5) 00:14:16.796 1.722 - 1.730: 0.0721% ( 7) 00:14:16.796 1.730 - 1.737: 0.1081% ( 6) 00:14:16.796 1.745 - 1.752: 0.1141% ( 1) 00:14:16.796 1.752 - 1.760: 0.3063% ( 32) 00:14:16.796 1.760 - 1.768: 4.6241% ( 719) 00:14:16.796 1.768 - 1.775: 23.8170% ( 3196) 00:14:16.796 1.775 - 1.783: 45.5020% ( 3611) 00:14:16.796 1.783 - 1.790: 53.0687% ( 1260) 00:14:16.796 1.790 - 1.798: 55.4708% ( 400) 00:14:16.796 1.798 - 1.806: 57.3385% ( 311) 00:14:16.796 1.806 - 1.813: 59.9568% ( 436) 00:14:16.796 1.813 - 1.821: 70.1357% ( 1695) 00:14:16.796 1.821 - 1.829: 84.6325% ( 2414) 00:14:16.796 1.829 - 1.836: 92.0010% ( 1227) 00:14:16.796 1.836 - 1.844: 94.5892% ( 431) 00:14:16.796 1.844 - 1.851: 96.2347% ( 274) 00:14:16.796 1.851 - 1.859: 97.4838% ( 208) 00:14:16.796 1.859 - 1.867: 97.9702% ( 81) 00:14:16.796 1.867 - 1.874: 98.1684% ( 33) 00:14:16.796 1.874 - 1.882: 98.3485% ( 30) 00:14:16.796 1.882 - 1.890: 98.5227% ( 29) 00:14:16.796 1.890 - 1.897: 98.6848% ( 27) 00:14:16.796 1.897 - 1.905: 98.9070% ( 37) 00:14:16.796 1.905 - 1.912: 99.0512% ( 24) 00:14:16.796 1.912 - 1.920: 99.1533% ( 17) 00:14:16.796 1.920 - 1.928: 99.1953% ( 7) 00:14:16.796 1.928 - 1.935: 99.2313% ( 6) 00:14:16.796 1.935 - 1.943: 99.2914% ( 10) 00:14:16.796 1.943 - 1.950: 99.3154% ( 4) 00:14:16.796 1.950 - 1.966: 99.3454% ( 5) 00:14:16.796 1.966 - 1.981: 99.3514% ( 1) 00:14:16.796 1.981 - 1.996: 99.3755% ( 4) 00:14:16.796 2.011 - 2.027: 99.3815% ( 1) 00:14:16.796 2.027 - 2.042: 99.3875% ( 1) 00:14:16.796 2.042 - 2.057: 99.3935% ( 1) 00:14:16.796 2.088 - 2.103: 99.3995% ( 1) 00:14:16.796 2.362 - 2.377: 99.4055% ( 1) 00:14:16.796 3.733 - 3.749: 99.4115% ( 1) 00:14:16.797 3.962 - 3.992: 99.4175% ( 1) 00:14:16.797 3.992 - 4.023: 99.4235% ( 1) 00:14:16.797 4.114 - 4.145: 99.4295% ( 1) 00:14:16.797 4.175 - 4.206: 99.4355% ( 1) 00:14:16.797 4.419 - 4.450: 99.4415% ( 1) 00:14:16.797 4.510 - 4.541: 99.4475% ( 1) 00:14:16.797 4.541 - 4.571: 99.4535% ( 1) 00:14:16.797 4.602 - 4.632: 99.4595% ( 1) 00:14:16.797 4.632 - 4.663: 99.4655% ( 1) 00:14:16.797 4.663 - 4.693: 99.4715% ( 1) 00:14:16.797 4.907 - 4.937: 99.4835% ( 2) 00:14:16.797 4.937 - 4.968: 99.4896% ( 1) 00:14:16.797 4.968 - 4.998: 99.4956% ( 1) 00:14:16.797 
5.059 - 5.090: 99.5016% ( 1) 00:14:16.797 5.608 - 5.638: 99.5076% ( 1) 00:14:16.797 5.821 - 5.851: 99.5136% ( 1) 00:14:16.797 5.912 - 5.943: 99.5196% ( 1) 00:14:16.797 6.217 - 6.248: 99.5256% ( 1) 00:14:16.797 6.370 - 6.400: 99.5316% ( 1) 00:14:16.797 6.522 - 6.552: 99.5376% ( 1) 00:14:16.797 6.735 - 6.766: 99.5436% ( 1) 00:14:16.797 8.046 - 8.107: 99.5496% ( 1) 00:14:16.797 8.107 - 8.168: 99.5556% ( 1) 00:14:16.797 8.655 - 8.716: 99.5616% ( 1) 00:14:16.797 11.947 - 12.008: 99.5676% ( 1) 00:14:16.797 12.312 - 12.373: 99.5736% ( 1) 00:14:16.797 16.091 - 16.213: 99.5796% ( 1) 00:14:16.797 183.345 - 184.320: 99.5856% ( 1) 00:14:16.797 3011.535 - 3027.139: 99.5916% ( 1) 00:14:16.797 3198.781 - 3214.385: 99.5976% ( 1) 00:14:16.797 3916.556 - 3932.160: 99.6037% ( 1) 00:14:16.797 3994.575 - 4025.783: 99.9940% ( 65) 00:14:16.797 4993.219 - 5024.427: 100.0000% ( 1) 00:14:16.797 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:16.797 [ 00:14:16.797 { 00:14:16.797 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:16.797 "subtype": "Discovery", 00:14:16.797 "listen_addresses": [], 00:14:16.797 "allow_any_host": true, 00:14:16.797 "hosts": [] 00:14:16.797 }, 00:14:16.797 { 00:14:16.797 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:16.797 "subtype": "NVMe", 00:14:16.797 "listen_addresses": [ 00:14:16.797 { 00:14:16.797 "trtype": "VFIOUSER", 00:14:16.797 "adrfam": "IPv4", 00:14:16.797 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:16.797 "trsvcid": "0" 00:14:16.797 } 00:14:16.797 ], 00:14:16.797 "allow_any_host": true, 00:14:16.797 "hosts": [], 00:14:16.797 "serial_number": "SPDK1", 00:14:16.797 "model_number": "SPDK bdev Controller", 00:14:16.797 "max_namespaces": 32, 00:14:16.797 "min_cntlid": 1, 00:14:16.797 "max_cntlid": 65519, 00:14:16.797 "namespaces": [ 00:14:16.797 { 00:14:16.797 "nsid": 1, 00:14:16.797 "bdev_name": "Malloc1", 00:14:16.797 "name": "Malloc1", 00:14:16.797 "nguid": "74A7C5AD5C0D4ECB90DED210594FECA1", 00:14:16.797 "uuid": "74a7c5ad-5c0d-4ecb-90de-d210594feca1" 00:14:16.797 }, 00:14:16.797 { 00:14:16.797 "nsid": 2, 00:14:16.797 "bdev_name": "Malloc3", 00:14:16.797 "name": "Malloc3", 00:14:16.797 "nguid": "AC6C606E5B1E40A4830392397AD4AF3C", 00:14:16.797 "uuid": "ac6c606e-5b1e-40a4-8303-92397ad4af3c" 00:14:16.797 } 00:14:16.797 ] 00:14:16.797 }, 00:14:16.797 { 00:14:16.797 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:16.797 "subtype": "NVMe", 00:14:16.797 "listen_addresses": [ 00:14:16.797 { 00:14:16.797 "trtype": "VFIOUSER", 00:14:16.797 "adrfam": "IPv4", 00:14:16.797 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:16.797 "trsvcid": "0" 00:14:16.797 } 00:14:16.797 ], 00:14:16.797 "allow_any_host": true, 00:14:16.797 "hosts": [], 00:14:16.797 "serial_number": "SPDK2", 00:14:16.797 "model_number": "SPDK bdev Controller", 00:14:16.797 
"max_namespaces": 32, 00:14:16.797 "min_cntlid": 1, 00:14:16.797 "max_cntlid": 65519, 00:14:16.797 "namespaces": [ 00:14:16.797 { 00:14:16.797 "nsid": 1, 00:14:16.797 "bdev_name": "Malloc2", 00:14:16.797 "name": "Malloc2", 00:14:16.797 "nguid": "1610B663147841D4A5AD21976FED021D", 00:14:16.797 "uuid": "1610b663-1478-41d4-a5ad-21976fed021d" 00:14:16.797 } 00:14:16.797 ] 00:14:16.797 } 00:14:16.797 ] 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1890447 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:16.797 16:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:17.055 [2024-11-20 16:15:48.142633] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:17.055 Malloc4 00:14:17.055 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:17.312 [2024-11-20 16:15:48.369338] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:17.312 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:17.312 Asynchronous Event Request test 00:14:17.312 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:17.312 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:17.312 Registering asynchronous event callbacks... 00:14:17.312 Starting namespace attribute notice tests for all controllers... 00:14:17.312 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:17.312 aer_cb - Changed Namespace 00:14:17.312 Cleaning up... 
00:14:17.571 [ 00:14:17.571 { 00:14:17.571 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:17.571 "subtype": "Discovery", 00:14:17.571 "listen_addresses": [], 00:14:17.571 "allow_any_host": true, 00:14:17.571 "hosts": [] 00:14:17.571 }, 00:14:17.571 { 00:14:17.571 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:17.571 "subtype": "NVMe", 00:14:17.571 "listen_addresses": [ 00:14:17.571 { 00:14:17.571 "trtype": "VFIOUSER", 00:14:17.571 "adrfam": "IPv4", 00:14:17.571 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:17.571 "trsvcid": "0" 00:14:17.571 } 00:14:17.571 ], 00:14:17.571 "allow_any_host": true, 00:14:17.571 "hosts": [], 00:14:17.571 "serial_number": "SPDK1", 00:14:17.571 "model_number": "SPDK bdev Controller", 00:14:17.571 "max_namespaces": 32, 00:14:17.571 "min_cntlid": 1, 00:14:17.571 "max_cntlid": 65519, 00:14:17.571 "namespaces": [ 00:14:17.571 { 00:14:17.571 "nsid": 1, 00:14:17.571 "bdev_name": "Malloc1", 00:14:17.571 "name": "Malloc1", 00:14:17.571 "nguid": "74A7C5AD5C0D4ECB90DED210594FECA1", 00:14:17.571 "uuid": "74a7c5ad-5c0d-4ecb-90de-d210594feca1" 00:14:17.571 }, 00:14:17.571 { 00:14:17.571 "nsid": 2, 00:14:17.571 "bdev_name": "Malloc3", 00:14:17.571 "name": "Malloc3", 00:14:17.571 "nguid": "AC6C606E5B1E40A4830392397AD4AF3C", 00:14:17.571 "uuid": "ac6c606e-5b1e-40a4-8303-92397ad4af3c" 00:14:17.571 } 00:14:17.571 ] 00:14:17.571 }, 00:14:17.571 { 00:14:17.571 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:17.571 "subtype": "NVMe", 00:14:17.571 "listen_addresses": [ 00:14:17.571 { 00:14:17.571 "trtype": "VFIOUSER", 00:14:17.571 "adrfam": "IPv4", 00:14:17.571 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:17.571 "trsvcid": "0" 00:14:17.571 } 00:14:17.571 ], 00:14:17.571 "allow_any_host": true, 00:14:17.571 "hosts": [], 00:14:17.571 "serial_number": "SPDK2", 00:14:17.571 "model_number": "SPDK bdev Controller", 00:14:17.571 "max_namespaces": 32, 00:14:17.571 "min_cntlid": 1, 00:14:17.571 "max_cntlid": 65519, 00:14:17.571 "namespaces": [ 00:14:17.571 { 00:14:17.571 "nsid": 1, 00:14:17.571 "bdev_name": "Malloc2", 00:14:17.571 "name": "Malloc2", 00:14:17.571 "nguid": "1610B663147841D4A5AD21976FED021D", 00:14:17.571 "uuid": "1610b663-1478-41d4-a5ad-21976fed021d" 00:14:17.571 }, 00:14:17.571 { 00:14:17.571 "nsid": 2, 00:14:17.571 "bdev_name": "Malloc4", 00:14:17.571 "name": "Malloc4", 00:14:17.571 "nguid": "D02FE647EADC48F2A2EE8F339B68E908", 00:14:17.571 "uuid": "d02fe647-eadc-48f2-a2ee-8f339b68e908" 00:14:17.571 } 00:14:17.571 ] 00:14:17.571 } 00:14:17.571 ] 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1890447 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1882268 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1882268 ']' 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1882268 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1882268 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1882268' 00:14:17.571 killing process with pid 1882268 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1882268 00:14:17.571 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1882268 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1890684 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1890684' 00:14:17.831 Process pid: 1890684 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1890684 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1890684 ']' 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.831 16:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:17.831 [2024-11-20 16:15:48.936542] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:17.831 [2024-11-20 16:15:48.937369] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:14:17.831 [2024-11-20 16:15:48.937410] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.831 [2024-11-20 16:15:48.997770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.831 [2024-11-20 16:15:49.040433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.831 [2024-11-20 16:15:49.040468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.831 [2024-11-20 16:15:49.040475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.831 [2024-11-20 16:15:49.040481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.831 [2024-11-20 16:15:49.040487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.831 [2024-11-20 16:15:49.041938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.831 [2024-11-20 16:15:49.041980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.831 [2024-11-20 16:15:49.042103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.831 [2024-11-20 16:15:49.042104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.090 [2024-11-20 16:15:49.113068] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:18.090 [2024-11-20 16:15:49.113514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:18.090 [2024-11-20 16:15:49.114075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:18.090 [2024-11-20 16:15:49.114216] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:18.090 [2024-11-20 16:15:49.114342] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:18.090 16:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.090 16:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:18.090 16:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:19.027 16:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:19.287 16:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:19.287 16:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:19.287 16:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:19.287 16:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:19.287 16:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:19.546 Malloc1 00:14:19.546 16:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:19.803 16:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:19.803 16:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:20.060 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:20.060 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:20.060 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:20.317 Malloc2 00:14:20.317 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:20.575 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:20.575 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:20.832 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:20.832 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1890684 00:14:20.832 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1890684 ']' 00:14:20.832 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1890684 00:14:20.832 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:20.832 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.832 16:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1890684 00:14:20.832 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.832 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.832 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1890684' 00:14:20.832 killing process with pid 1890684 00:14:20.832 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1890684 00:14:20.832 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1890684 00:14:21.091 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:21.091 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:21.091 00:14:21.091 real 0m51.464s 00:14:21.091 user 3m19.336s 00:14:21.091 sys 0m3.182s 00:14:21.091 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.091 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:21.091 ************************************ 00:14:21.091 END TEST nvmf_vfio_user 00:14:21.091 ************************************ 00:14:21.091 16:15:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:21.091 16:15:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:21.091 16:15:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.091 16:15:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.091 ************************************ 00:14:21.091 START TEST nvmf_vfio_user_nvme_compliance 00:14:21.091 ************************************ 00:14:21.091 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:21.351 * Looking for test storage... 
00:14:21.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.351 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:21.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.352 --rc genhtml_branch_coverage=1 00:14:21.352 --rc genhtml_function_coverage=1 00:14:21.352 --rc genhtml_legend=1 00:14:21.352 --rc geninfo_all_blocks=1 00:14:21.352 --rc geninfo_unexecuted_blocks=1 00:14:21.352 00:14:21.352 ' 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:21.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.352 --rc genhtml_branch_coverage=1 00:14:21.352 --rc genhtml_function_coverage=1 00:14:21.352 --rc genhtml_legend=1 00:14:21.352 --rc geninfo_all_blocks=1 00:14:21.352 --rc geninfo_unexecuted_blocks=1 00:14:21.352 00:14:21.352 ' 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:21.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.352 --rc genhtml_branch_coverage=1 00:14:21.352 --rc genhtml_function_coverage=1 00:14:21.352 --rc genhtml_legend=1 00:14:21.352 --rc geninfo_all_blocks=1 00:14:21.352 --rc geninfo_unexecuted_blocks=1 00:14:21.352 00:14:21.352 ' 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:21.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.352 --rc genhtml_branch_coverage=1 00:14:21.352 --rc genhtml_function_coverage=1 00:14:21.352 --rc genhtml_legend=1 00:14:21.352 --rc geninfo_all_blocks=1 00:14:21.352 --rc 
geninfo_unexecuted_blocks=1 00:14:21.352 00:14:21.352 ' 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:21.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1891235 00:14:21.352 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1891235' 00:14:21.353 Process pid: 1891235 00:14:21.353 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:21.353 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:21.353 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1891235 00:14:21.353 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1891235 ']' 00:14:21.353 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.353 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.353 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.353 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.353 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:21.353 [2024-11-20 16:15:52.561171] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:14:21.353 [2024-11-20 16:15:52.561225] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.612 [2024-11-20 16:15:52.634424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.612 [2024-11-20 16:15:52.673521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.612 [2024-11-20 16:15:52.673556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.612 [2024-11-20 16:15:52.673564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.612 [2024-11-20 16:15:52.673569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.612 [2024-11-20 16:15:52.673575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.612 [2024-11-20 16:15:52.674908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.612 [2024-11-20 16:15:52.675013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.612 [2024-11-20 16:15:52.675015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.612 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.612 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:21.612 16:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:22.988 malloc0 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:22.988 16:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.988 16:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:22.988 00:14:22.988 00:14:22.988 CUnit - A unit testing framework for C - Version 2.1-3 00:14:22.988 http://cunit.sourceforge.net/ 00:14:22.988 00:14:22.988 00:14:22.988 Suite: nvme_compliance 00:14:22.988 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 16:15:54.013661] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.988 [2024-11-20 16:15:54.015010] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:22.988 [2024-11-20 16:15:54.015029] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:22.988 [2024-11-20 16:15:54.015036] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:22.988 [2024-11-20 16:15:54.016681] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.988 passed 00:14:22.988 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 16:15:54.096237] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.988 [2024-11-20 16:15:54.099266] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.988 passed 00:14:22.988 Test: admin_identify_ns ...[2024-11-20 16:15:54.178490] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:23.246 [2024-11-20 16:15:54.239214] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:23.246 [2024-11-20 16:15:54.247218] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:23.246 [2024-11-20 16:15:54.268299] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:23.246 passed 00:14:23.246 Test: admin_get_features_mandatory_features ...[2024-11-20 16:15:54.342059] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:23.246 [2024-11-20 16:15:54.345084] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:23.246 passed 00:14:23.246 Test: admin_get_features_optional_features ...[2024-11-20 16:15:54.422641] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:23.246 [2024-11-20 16:15:54.425659] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:23.246 passed 00:14:23.505 Test: admin_set_features_number_of_queues ...[2024-11-20 16:15:54.504420] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:23.505 [2024-11-20 16:15:54.610282] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:23.505 passed 00:14:23.505 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 16:15:54.685958] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:23.505 [2024-11-20 16:15:54.688990] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:23.505 passed 00:14:23.763 Test: admin_get_log_page_with_lpo ...[2024-11-20 16:15:54.763682] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:23.763 [2024-11-20 16:15:54.835219] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:23.763 [2024-11-20 16:15:54.848256] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:23.763 passed 00:14:23.763 Test: fabric_property_get ...[2024-11-20 16:15:54.920994] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:23.763 [2024-11-20 16:15:54.922240] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:23.763 [2024-11-20 16:15:54.926020] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:23.763 passed 00:14:24.021 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 16:15:55.001520] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.021 [2024-11-20 16:15:55.002747] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:24.021 [2024-11-20 16:15:55.006555] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.021 passed 00:14:24.021 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 16:15:55.081500] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.021 [2024-11-20 16:15:55.169213] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:24.021 [2024-11-20 16:15:55.184214] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:24.021 [2024-11-20 16:15:55.189284] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.021 passed 00:14:24.279 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 16:15:55.263131] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.279 [2024-11-20 16:15:55.264363] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:24.279 [2024-11-20 16:15:55.266151] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.279 passed 00:14:24.279 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 16:15:55.342903] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.279 [2024-11-20 16:15:55.419211] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:24.279 [2024-11-20 16:15:55.443207] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:24.279 [2024-11-20 16:15:55.448291] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.279 passed 00:14:24.538 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 16:15:55.523987] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.538 [2024-11-20 16:15:55.525223] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:24.538 [2024-11-20 16:15:55.525245] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:24.538 [2024-11-20 16:15:55.527011] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.538 passed 00:14:24.538 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 16:15:55.604708] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.538 [2024-11-20 16:15:55.697208] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:24.538 [2024-11-20 16:15:55.705210] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:24.538 [2024-11-20 16:15:55.713215] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:24.538 [2024-11-20 16:15:55.721211] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:24.538 [2024-11-20 16:15:55.750283] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.796 passed 00:14:24.796 Test: admin_create_io_sq_verify_pc ...[2024-11-20 16:15:55.824161] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.796 [2024-11-20 16:15:55.840220] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:24.796 [2024-11-20 16:15:55.857350] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.796 passed 00:14:24.796 Test: admin_create_io_qp_max_qps ...[2024-11-20 16:15:55.935881] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:26.172 [2024-11-20 16:15:57.043213] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:26.430 [2024-11-20 16:15:57.423960] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:26.430 passed 00:14:26.430 Test: admin_create_io_sq_shared_cq ...[2024-11-20 16:15:57.500830] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:26.430 [2024-11-20 16:15:57.632211] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:26.689 [2024-11-20 16:15:57.669264] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:26.689 passed 00:14:26.689 00:14:26.689 Run Summary: Type Total Ran Passed Failed Inactive 00:14:26.689 suites 1 1 n/a 0 0 00:14:26.689 tests 18 18 18 0 0 00:14:26.689 asserts 
360 360 360 0 n/a 00:14:26.689 00:14:26.689 Elapsed time = 1.502 seconds 00:14:26.689 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1891235 00:14:26.689 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1891235 ']' 00:14:26.689 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1891235 00:14:26.689 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:26.689 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.689 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1891235 00:14:26.689 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.689 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.689 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1891235' 00:14:26.689 killing process with pid 1891235 00:14:26.689 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1891235 00:14:26.689 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1891235 00:14:26.948 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:26.948 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:26.948 00:14:26.948 real 0m5.634s 00:14:26.948 user 0m15.774s 00:14:26.948 sys 0m0.535s 00:14:26.948 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.948 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:26.948 ************************************ 00:14:26.948 END TEST nvmf_vfio_user_nvme_compliance 00:14:26.948 ************************************ 00:14:26.948 16:15:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:26.948 16:15:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:26.948 16:15:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.948 16:15:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:26.948 ************************************ 00:14:26.948 START TEST nvmf_vfio_user_fuzz 00:14:26.948 ************************************ 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:26.948 * Looking for test storage... 
00:14:26.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:26.948 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:27.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.209 --rc genhtml_branch_coverage=1 00:14:27.209 --rc genhtml_function_coverage=1 00:14:27.209 --rc genhtml_legend=1 00:14:27.209 --rc geninfo_all_blocks=1 00:14:27.209 --rc geninfo_unexecuted_blocks=1 00:14:27.209 00:14:27.209 ' 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:27.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.209 --rc genhtml_branch_coverage=1 00:14:27.209 --rc genhtml_function_coverage=1 00:14:27.209 --rc genhtml_legend=1 00:14:27.209 --rc geninfo_all_blocks=1 00:14:27.209 --rc geninfo_unexecuted_blocks=1 00:14:27.209 00:14:27.209 ' 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:27.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.209 --rc genhtml_branch_coverage=1 00:14:27.209 --rc genhtml_function_coverage=1 00:14:27.209 --rc genhtml_legend=1 00:14:27.209 --rc geninfo_all_blocks=1 00:14:27.209 --rc geninfo_unexecuted_blocks=1 00:14:27.209 00:14:27.209 ' 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:27.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.209 --rc genhtml_branch_coverage=1 00:14:27.209 --rc genhtml_function_coverage=1 00:14:27.209 --rc genhtml_legend=1 00:14:27.209 --rc geninfo_all_blocks=1 00:14:27.209 --rc geninfo_unexecuted_blocks=1 00:14:27.209 00:14:27.209 ' 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:27.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1892215 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1892215' 00:14:27.209 Process pid: 1892215 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:27.209 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1892215 00:14:27.210 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1892215 ']' 00:14:27.210 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.210 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.210 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:27.210 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.210 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:27.468 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.468 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:27.468 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:28.404 malloc0 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:14:28.404 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:00.477 Fuzzing completed. Shutting down the fuzz application 00:15:00.477 00:15:00.477 Dumping successful admin opcodes: 00:15:00.477 8, 9, 10, 24, 00:15:00.477 Dumping successful io opcodes: 00:15:00.477 0, 00:15:00.477 NS: 0x20000081ef00 I/O qp, Total commands completed: 1142156, total successful commands: 4502, random_seed: 1333267776 00:15:00.477 NS: 0x20000081ef00 admin qp, Total commands completed: 284146, total successful commands: 2293, random_seed: 2363629504 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1892215 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1892215 ']' 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1892215 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1892215 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1892215' 00:15:00.477 killing process with pid 1892215 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1892215 00:15:00.477 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1892215 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:00.477 00:15:00.477 real 0m32.231s 00:15:00.477 user 0m34.211s 00:15:00.477 sys 0m27.178s 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:00.477 
************************************ 00:15:00.477 END TEST nvmf_vfio_user_fuzz 00:15:00.477 ************************************ 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:00.477 ************************************ 00:15:00.477 START TEST nvmf_auth_target 00:15:00.477 ************************************ 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:00.477 * Looking for test storage... 00:15:00.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.477 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:00.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.478 --rc genhtml_branch_coverage=1 00:15:00.478 --rc genhtml_function_coverage=1 00:15:00.478 --rc genhtml_legend=1 00:15:00.478 --rc geninfo_all_blocks=1 00:15:00.478 --rc geninfo_unexecuted_blocks=1 00:15:00.478 00:15:00.478 ' 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:00.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.478 --rc genhtml_branch_coverage=1 00:15:00.478 --rc genhtml_function_coverage=1 00:15:00.478 --rc genhtml_legend=1 00:15:00.478 --rc geninfo_all_blocks=1 00:15:00.478 --rc geninfo_unexecuted_blocks=1 00:15:00.478 00:15:00.478 ' 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:00.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.478 --rc genhtml_branch_coverage=1 00:15:00.478 --rc genhtml_function_coverage=1 00:15:00.478 --rc genhtml_legend=1 00:15:00.478 --rc geninfo_all_blocks=1 00:15:00.478 --rc geninfo_unexecuted_blocks=1 00:15:00.478 00:15:00.478 ' 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:00.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.478 --rc genhtml_branch_coverage=1 00:15:00.478 --rc genhtml_function_coverage=1 00:15:00.478 --rc genhtml_legend=1 00:15:00.478 --rc geninfo_all_blocks=1 00:15:00.478 --rc geninfo_unexecuted_blocks=1 00:15:00.478 00:15:00.478 ' 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.478 16:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:00.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:00.478 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:05.854 
16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:05.854 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:05.854 16:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:05.854 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:05.854 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:05.855 Found net devices under 0000:86:00.0: cvl_0_0 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:05.855 Found net devices under 0000:86:00.1: cvl_0_1 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:05.855 16:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:05.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:15:05.855 00:15:05.855 --- 10.0.0.2 ping statistics --- 00:15:05.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.855 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:15:05.855 00:15:05.855 --- 10.0.0.1 ping statistics --- 00:15:05.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.855 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1900579 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1900579 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1900579 ']' 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
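At this point nvmf_tcp_init has built the back-to-back topology used by the rest of the TCP phy tests: the first detected e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target side (10.0.0.2), the second port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP/4420 on the initiator interface, and a ping in each direction verifies the link. Condensed from the commands in the trace (interface names are whatever this host detected; the SPDK_NVMF comment on the iptables rule is omitted):

    # Back-to-back namespace setup, as performed by nvmf_tcp_init above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

The target application for the auth test is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), as the next entries show.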
00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.855 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1900765 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c1a86b2c3a31d7f6de962e2db6c52cd0c9d8874b86fb2215 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Uwm 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c1a86b2c3a31d7f6de962e2db6c52cd0c9d8874b86fb2215 0 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c1a86b2c3a31d7f6de962e2db6c52cd0c9d8874b86fb2215 0 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # 
key=c1a86b2c3a31d7f6de962e2db6c52cd0c9d8874b86fb2215 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Uwm 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Uwm 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Uwm 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e48092084fe538253adf1d3c494d6bd17f0e9fdf7459c6b2d74f7ca28ef30b31 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.OdU 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e48092084fe538253adf1d3c494d6bd17f0e9fdf7459c6b2d74f7ca28ef30b31 3 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e48092084fe538253adf1d3c494d6bd17f0e9fdf7459c6b2d74f7ca28ef30b31 3 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e48092084fe538253adf1d3c494d6bd17f0e9fdf7459c6b2d74f7ca28ef30b31 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.OdU 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.OdU 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.OdU 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f79c3fa0b547582ecab5291d0e29998c 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qsb 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f79c3fa0b547582ecab5291d0e29998c 1 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f79c3fa0b547582ecab5291d0e29998c 1 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f79c3fa0b547582ecab5291d0e29998c 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qsb 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qsb 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.qsb 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:05.856 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b61e7376f8a461c6bcae88fb0e7a801d4b5b8e32480f3fcf 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Map 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b61e7376f8a461c6bcae88fb0e7a801d4b5b8e32480f3fcf 2 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@747 -- # format_key DHHC-1 b61e7376f8a461c6bcae88fb0e7a801d4b5b8e32480f3fcf 2 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b61e7376f8a461c6bcae88fb0e7a801d4b5b8e32480f3fcf 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Map 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Map 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Map 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:05.856 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:05.857 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b48182d57faa03ce3c6ac2be15fd3821b1d876ab1612137c 00:15:05.857 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:05.857 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.l4b 00:15:05.857 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b48182d57faa03ce3c6ac2be15fd3821b1d876ab1612137c 2 00:15:05.857 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b48182d57faa03ce3c6ac2be15fd3821b1d876ab1612137c 2 00:15:05.857 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:05.857 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:05.857 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b48182d57faa03ce3c6ac2be15fd3821b1d876ab1612137c 00:15:05.857 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:05.857 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.l4b 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.l4b 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.l4b 
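The secrets being generated in this stretch of the trace (keys[0..3] and the controller counterparts ckeys[0..2]) each come from gen_dhchap_key in test/nvmf/common.sh: it draws len/2 random bytes as hex with xxd -p -c0 /dev/urandom, wraps them into a DHHC-1 secret for the chosen digest (the 'python -' step, whose exact encoding is not shown in the log), writes the result to a mktemp'd /tmp/spdk.key-<digest>.XXX file with mode 0600, and echoes the path for auth.sh to record. A minimal sketch of the same loop, assuming an SPDK checkout and the usual autotest environment so the helper can be sourced:

    # Sketch only: reuse SPDK's own helper rather than re-implementing the
    # DHHC-1 wrapping. SPDK_DIR is an assumption; point it at your checkout.
    SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
    source "$SPDK_DIR/test/nvmf/common.sh"

    keys=() ckeys=()
    keys[0]=$(gen_dhchap_key "null"   48); ckeys[0]=$(gen_dhchap_key "sha512" 64)
    keys[1]=$(gen_dhchap_key "sha256" 32); ckeys[1]=$(gen_dhchap_key "sha384" 48)
    keys[2]=$(gen_dhchap_key "sha384" 48); ckeys[2]=$(gen_dhchap_key "sha256" 32)
    keys[3]=$(gen_dhchap_key "sha512" 64); ckeys[3]=
    ls -l "${keys[@]}" "${ckeys[@]:0:3}"   # each entry is a 0600 /tmp/spdk.key-*.??? file

These paths are what the keyring_file_add_key RPCs further down register on both the target socket and the host socket (/var/tmp/host.sock) before the digest/dhgroup loops start.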
00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bfc29e0dbc6bccf0b64415cc79166603 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mj4 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bfc29e0dbc6bccf0b64415cc79166603 1 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bfc29e0dbc6bccf0b64415cc79166603 1 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bfc29e0dbc6bccf0b64415cc79166603 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mj4 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mj4 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.mj4 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a69211b418d456e7e9b6a44b2d1223aca27bfd14e7f0972a8b371de5595a8dee 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-sha512.XXX 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.law 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a69211b418d456e7e9b6a44b2d1223aca27bfd14e7f0972a8b371de5595a8dee 3 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a69211b418d456e7e9b6a44b2d1223aca27bfd14e7f0972a8b371de5595a8dee 3 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a69211b418d456e7e9b6a44b2d1223aca27bfd14e7f0972a8b371de5595a8dee 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.law 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.law 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.law 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1900579 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1900579 ']' 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.116 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.374 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.374 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:06.374 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1900765 /var/tmp/host.sock 00:15:06.374 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1900765 ']' 00:15:06.374 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:06.374 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.374 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:15:06.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:06.374 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.374 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.632 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.632 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:06.632 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:06.632 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.632 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.632 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.633 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:06.633 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Uwm 00:15:06.633 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.633 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.633 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.633 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Uwm 00:15:06.633 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Uwm 00:15:06.633 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.OdU ]] 00:15:06.633 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OdU 00:15:06.633 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.633 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.891 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.892 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OdU 00:15:06.892 16:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OdU 00:15:06.892 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:06.892 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.qsb 00:15:06.892 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.892 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.892 16:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.892 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.qsb 00:15:06.892 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.qsb 00:15:07.150 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Map ]] 00:15:07.150 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Map 00:15:07.150 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.151 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.151 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.151 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Map 00:15:07.151 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Map 00:15:07.409 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:07.409 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.l4b 00:15:07.409 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.409 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.409 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.409 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.l4b 00:15:07.409 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.l4b 00:15:07.667 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.mj4 ]] 00:15:07.667 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mj4 00:15:07.667 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.667 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.667 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.667 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mj4 00:15:07.667 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mj4 00:15:07.925 16:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:07.925 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.law 00:15:07.925 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.925 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.925 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.925 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.law 00:15:07.925 16:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.law 00:15:07.925 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:07.925 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:07.925 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.925 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.925 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:07.925 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.185 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.448 00:15:08.448 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.448 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.448 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.707 { 00:15:08.707 "cntlid": 1, 00:15:08.707 "qid": 0, 00:15:08.707 "state": "enabled", 00:15:08.707 "thread": "nvmf_tgt_poll_group_000", 00:15:08.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:08.707 "listen_address": { 00:15:08.707 "trtype": "TCP", 00:15:08.707 "adrfam": "IPv4", 00:15:08.707 "traddr": "10.0.0.2", 00:15:08.707 "trsvcid": "4420" 00:15:08.707 }, 00:15:08.707 "peer_address": { 00:15:08.707 "trtype": "TCP", 00:15:08.707 "adrfam": "IPv4", 00:15:08.707 "traddr": "10.0.0.1", 00:15:08.707 "trsvcid": "32928" 00:15:08.707 }, 00:15:08.707 "auth": { 00:15:08.707 "state": "completed", 00:15:08.707 "digest": "sha256", 00:15:08.707 "dhgroup": "null" 00:15:08.707 } 00:15:08.707 } 00:15:08.707 ]' 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.707 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
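The round that just completed above is the core pattern this test repeats for every digest/dhgroup/key combination: restrict the host-side bdev_nvme module to one digest and one DH group, grant the host NQN access to the subsystem with a specific key pair, attach a controller (which forces the DH-HMAC-CHAP handshake), and confirm on the target that the queue pair's auth state reached "completed". Condensed into plain commands (a hand-written restatement of the trace, not a copy of target/auth.sh; the SPDK, HOSTNQN and SUBNQN variables are introduced here only for brevity):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  # host side: allow only sha256 and the "null" DH group for this round
  $SPDK/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  # target side: authorize the host NQN with key0/ckey0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach a controller from the host, which performs the DH-HMAC-CHAP authentication
  $SPDK/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify on the target that authentication completed, then tear the controller down
  $SPDK/scripts/rpc.py nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'   # expect "completed"
  $SPDK/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The trace then re-runs the same check through the kernel initiator (nvme connect with the literal DHHC-1 strings passed via --dhchap-secret/--dhchap-ctrl-secret, followed by nvme disconnect) before removing the host from the subsystem and moving on to the next key index.
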
00:15:08.967 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:08.967 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:09.535 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.535 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:09.535 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.535 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.535 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.535 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.535 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:09.535 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.793 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.052 00:15:10.052 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.052 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.052 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.311 { 00:15:10.311 "cntlid": 3, 00:15:10.311 "qid": 0, 00:15:10.311 "state": "enabled", 00:15:10.311 "thread": "nvmf_tgt_poll_group_000", 00:15:10.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:10.311 "listen_address": { 00:15:10.311 "trtype": "TCP", 00:15:10.311 "adrfam": "IPv4", 00:15:10.311 "traddr": "10.0.0.2", 00:15:10.311 "trsvcid": "4420" 00:15:10.311 }, 00:15:10.311 "peer_address": { 00:15:10.311 "trtype": "TCP", 00:15:10.311 "adrfam": "IPv4", 00:15:10.311 "traddr": "10.0.0.1", 00:15:10.311 "trsvcid": "41242" 00:15:10.311 }, 00:15:10.311 "auth": { 00:15:10.311 "state": "completed", 00:15:10.311 "digest": "sha256", 00:15:10.311 "dhgroup": "null" 00:15:10.311 } 00:15:10.311 } 00:15:10.311 ]' 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:10.311 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.570 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:10.570 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:11.137 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.137 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:11.137 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.137 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.137 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.137 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.137 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:11.137 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.396 16:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.396 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.655 00:15:11.655 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.655 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.655 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.655 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.655 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.655 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.655 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.655 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.655 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.655 { 00:15:11.655 "cntlid": 5, 00:15:11.655 "qid": 0, 00:15:11.655 "state": "enabled", 00:15:11.655 "thread": "nvmf_tgt_poll_group_000", 00:15:11.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:11.655 "listen_address": { 00:15:11.655 "trtype": "TCP", 00:15:11.655 "adrfam": "IPv4", 00:15:11.655 "traddr": "10.0.0.2", 00:15:11.655 "trsvcid": "4420" 00:15:11.655 }, 00:15:11.655 "peer_address": { 00:15:11.655 "trtype": "TCP", 00:15:11.655 "adrfam": "IPv4", 00:15:11.655 "traddr": "10.0.0.1", 00:15:11.655 "trsvcid": "41270" 00:15:11.655 }, 00:15:11.655 "auth": { 00:15:11.655 "state": "completed", 00:15:11.655 "digest": "sha256", 00:15:11.655 "dhgroup": "null" 00:15:11.655 } 00:15:11.655 } 00:15:11.655 ]' 00:15:11.655 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.914 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.914 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.914 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:11.914 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.914 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.914 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.914 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.172 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:12.172 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.740 
16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.740 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.030 00:15:13.030 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.030 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.030 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.289 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.289 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.289 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.289 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.289 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.289 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.289 { 00:15:13.289 "cntlid": 7, 00:15:13.289 "qid": 0, 00:15:13.289 "state": "enabled", 00:15:13.289 "thread": "nvmf_tgt_poll_group_000", 00:15:13.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:13.289 "listen_address": { 00:15:13.289 "trtype": "TCP", 00:15:13.289 "adrfam": "IPv4", 00:15:13.289 "traddr": "10.0.0.2", 00:15:13.289 "trsvcid": "4420" 00:15:13.289 }, 00:15:13.289 "peer_address": { 00:15:13.289 "trtype": "TCP", 00:15:13.289 "adrfam": "IPv4", 00:15:13.289 "traddr": "10.0.0.1", 00:15:13.289 "trsvcid": "41290" 00:15:13.289 }, 00:15:13.289 "auth": { 00:15:13.289 "state": "completed", 00:15:13.289 "digest": "sha256", 00:15:13.289 "dhgroup": "null" 00:15:13.289 } 00:15:13.289 } 00:15:13.289 ]' 00:15:13.289 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.289 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.289 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.548 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:13.548 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.548 16:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.548 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.548 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.548 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:13.548 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:14.115 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.115 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:14.115 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.115 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.115 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.115 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.115 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.115 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:14.115 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:14.373 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:14.373 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.373 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.374 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:14.374 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:14.374 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.374 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.374 16:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.374 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.374 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.374 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.374 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.374 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.632 00:15:14.633 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.633 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.633 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.891 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.891 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.891 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.891 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.891 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.891 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.891 { 00:15:14.891 "cntlid": 9, 00:15:14.891 "qid": 0, 00:15:14.891 "state": "enabled", 00:15:14.891 "thread": "nvmf_tgt_poll_group_000", 00:15:14.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:14.891 "listen_address": { 00:15:14.891 "trtype": "TCP", 00:15:14.891 "adrfam": "IPv4", 00:15:14.891 "traddr": "10.0.0.2", 00:15:14.892 "trsvcid": "4420" 00:15:14.892 }, 00:15:14.892 "peer_address": { 00:15:14.892 "trtype": "TCP", 00:15:14.892 "adrfam": "IPv4", 00:15:14.892 "traddr": "10.0.0.1", 00:15:14.892 "trsvcid": "41336" 00:15:14.892 }, 00:15:14.892 "auth": { 00:15:14.892 "state": "completed", 00:15:14.892 "digest": "sha256", 00:15:14.892 "dhgroup": "ffdhe2048" 00:15:14.892 } 00:15:14.892 } 00:15:14.892 ]' 00:15:14.892 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.892 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.892 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.892 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:14.892 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.892 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.892 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.892 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.151 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:15.151 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:15.718 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.718 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:15.718 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.718 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.718 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.718 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.718 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:15.718 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.976 
16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.976 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.235 00:15:16.235 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.235 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.235 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.494 { 00:15:16.494 "cntlid": 11, 00:15:16.494 "qid": 0, 00:15:16.494 "state": "enabled", 00:15:16.494 "thread": "nvmf_tgt_poll_group_000", 00:15:16.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:16.494 "listen_address": { 00:15:16.494 "trtype": "TCP", 00:15:16.494 "adrfam": "IPv4", 00:15:16.494 "traddr": "10.0.0.2", 00:15:16.494 "trsvcid": "4420" 00:15:16.494 }, 00:15:16.494 "peer_address": { 00:15:16.494 "trtype": "TCP", 00:15:16.494 "adrfam": "IPv4", 00:15:16.494 "traddr": "10.0.0.1", 00:15:16.494 "trsvcid": "41358" 00:15:16.494 }, 00:15:16.494 "auth": { 00:15:16.494 "state": "completed", 00:15:16.494 "digest": "sha256", 00:15:16.494 "dhgroup": "ffdhe2048" 00:15:16.494 } 00:15:16.494 } 00:15:16.494 ]' 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.494 16:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.494 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.752 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:16.752 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:17.320 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.320 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:17.320 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.320 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.320 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.320 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.320 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:17.320 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:17.578 16:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.578 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.837 00:15:17.837 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.837 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.837 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.096 { 00:15:18.096 "cntlid": 13, 00:15:18.096 "qid": 0, 00:15:18.096 "state": "enabled", 00:15:18.096 "thread": "nvmf_tgt_poll_group_000", 00:15:18.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:18.096 "listen_address": { 00:15:18.096 "trtype": "TCP", 00:15:18.096 "adrfam": "IPv4", 00:15:18.096 "traddr": "10.0.0.2", 00:15:18.096 "trsvcid": "4420" 00:15:18.096 }, 00:15:18.096 "peer_address": { 00:15:18.096 "trtype": "TCP", 00:15:18.096 "adrfam": "IPv4", 00:15:18.096 "traddr": "10.0.0.1", 00:15:18.096 "trsvcid": "41382" 00:15:18.096 }, 00:15:18.096 "auth": { 00:15:18.096 "state": "completed", 00:15:18.096 "digest": 
"sha256", 00:15:18.096 "dhgroup": "ffdhe2048" 00:15:18.096 } 00:15:18.096 } 00:15:18.096 ]' 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.096 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.356 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:18.356 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:18.922 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.922 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:18.922 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.922 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.922 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.922 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.922 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:18.922 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.181 16:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.181 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.439 00:15:19.439 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.439 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.439 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.699 { 00:15:19.699 "cntlid": 15, 00:15:19.699 "qid": 0, 00:15:19.699 "state": "enabled", 00:15:19.699 "thread": "nvmf_tgt_poll_group_000", 00:15:19.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:19.699 "listen_address": { 00:15:19.699 "trtype": "TCP", 00:15:19.699 "adrfam": "IPv4", 00:15:19.699 "traddr": "10.0.0.2", 00:15:19.699 "trsvcid": "4420" 00:15:19.699 }, 00:15:19.699 "peer_address": { 00:15:19.699 "trtype": "TCP", 00:15:19.699 "adrfam": "IPv4", 00:15:19.699 "traddr": "10.0.0.1", 00:15:19.699 
"trsvcid": "40228" 00:15:19.699 }, 00:15:19.699 "auth": { 00:15:19.699 "state": "completed", 00:15:19.699 "digest": "sha256", 00:15:19.699 "dhgroup": "ffdhe2048" 00:15:19.699 } 00:15:19.699 } 00:15:19.699 ]' 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.699 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.959 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:19.959 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:20.526 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.526 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:20.526 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.526 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.526 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.526 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.526 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.526 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:20.526 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:20.785 16:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.785 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.044 00:15:21.044 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.044 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.044 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.303 { 00:15:21.303 "cntlid": 17, 00:15:21.303 "qid": 0, 00:15:21.303 "state": "enabled", 00:15:21.303 "thread": "nvmf_tgt_poll_group_000", 00:15:21.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:21.303 "listen_address": { 00:15:21.303 "trtype": "TCP", 00:15:21.303 "adrfam": "IPv4", 
00:15:21.303 "traddr": "10.0.0.2", 00:15:21.303 "trsvcid": "4420" 00:15:21.303 }, 00:15:21.303 "peer_address": { 00:15:21.303 "trtype": "TCP", 00:15:21.303 "adrfam": "IPv4", 00:15:21.303 "traddr": "10.0.0.1", 00:15:21.303 "trsvcid": "40250" 00:15:21.303 }, 00:15:21.303 "auth": { 00:15:21.303 "state": "completed", 00:15:21.303 "digest": "sha256", 00:15:21.303 "dhgroup": "ffdhe3072" 00:15:21.303 } 00:15:21.303 } 00:15:21.303 ]' 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.303 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.562 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:21.562 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:22.130 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.130 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:22.130 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.130 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.130 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.130 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.130 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:22.130 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.389 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.647 00:15:22.647 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.647 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.647 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.905 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.905 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.905 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.905 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.905 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.905 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.905 { 
00:15:22.905 "cntlid": 19, 00:15:22.905 "qid": 0, 00:15:22.905 "state": "enabled", 00:15:22.905 "thread": "nvmf_tgt_poll_group_000", 00:15:22.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:22.905 "listen_address": { 00:15:22.905 "trtype": "TCP", 00:15:22.905 "adrfam": "IPv4", 00:15:22.905 "traddr": "10.0.0.2", 00:15:22.905 "trsvcid": "4420" 00:15:22.905 }, 00:15:22.905 "peer_address": { 00:15:22.905 "trtype": "TCP", 00:15:22.905 "adrfam": "IPv4", 00:15:22.905 "traddr": "10.0.0.1", 00:15:22.905 "trsvcid": "40278" 00:15:22.905 }, 00:15:22.905 "auth": { 00:15:22.905 "state": "completed", 00:15:22.905 "digest": "sha256", 00:15:22.905 "dhgroup": "ffdhe3072" 00:15:22.905 } 00:15:22.905 } 00:15:22.905 ]' 00:15:22.905 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.905 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.905 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.905 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.905 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.905 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.905 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.905 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.163 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:23.163 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:23.730 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.730 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:23.730 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.730 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.730 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.730 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.730 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.730 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.989 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.247 00:15:24.247 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.247 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.247 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.247 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.247 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.247 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.247 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.247 16:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.247 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.247 { 00:15:24.247 "cntlid": 21, 00:15:24.247 "qid": 0, 00:15:24.247 "state": "enabled", 00:15:24.247 "thread": "nvmf_tgt_poll_group_000", 00:15:24.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:24.247 "listen_address": { 00:15:24.247 "trtype": "TCP", 00:15:24.247 "adrfam": "IPv4", 00:15:24.247 "traddr": "10.0.0.2", 00:15:24.247 "trsvcid": "4420" 00:15:24.247 }, 00:15:24.247 "peer_address": { 00:15:24.247 "trtype": "TCP", 00:15:24.247 "adrfam": "IPv4", 00:15:24.247 "traddr": "10.0.0.1", 00:15:24.247 "trsvcid": "40302" 00:15:24.247 }, 00:15:24.247 "auth": { 00:15:24.247 "state": "completed", 00:15:24.247 "digest": "sha256", 00:15:24.247 "dhgroup": "ffdhe3072" 00:15:24.247 } 00:15:24.247 } 00:15:24.247 ]' 00:15:24.247 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.505 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.505 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.505 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.505 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.505 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.505 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.505 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.764 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:24.764 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:25.330 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.330 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:25.330 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.330 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.330 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:25.330 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.330 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:25.330 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.588 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.846 00:15:25.846 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.846 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.846 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.846 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.846 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.846 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.846 16:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.846 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.846 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.846 { 00:15:25.846 "cntlid": 23, 00:15:25.846 "qid": 0, 00:15:25.846 "state": "enabled", 00:15:25.846 "thread": "nvmf_tgt_poll_group_000", 00:15:25.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:25.846 "listen_address": { 00:15:25.846 "trtype": "TCP", 00:15:25.846 "adrfam": "IPv4", 00:15:25.846 "traddr": "10.0.0.2", 00:15:25.846 "trsvcid": "4420" 00:15:25.846 }, 00:15:25.846 "peer_address": { 00:15:25.846 "trtype": "TCP", 00:15:25.846 "adrfam": "IPv4", 00:15:25.846 "traddr": "10.0.0.1", 00:15:25.846 "trsvcid": "40328" 00:15:25.846 }, 00:15:25.846 "auth": { 00:15:25.846 "state": "completed", 00:15:25.846 "digest": "sha256", 00:15:25.846 "dhgroup": "ffdhe3072" 00:15:25.846 } 00:15:25.846 } 00:15:25.846 ]' 00:15:25.846 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.104 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.104 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.104 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:26.104 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.104 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.104 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.104 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.363 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:26.363 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:26.928 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.928 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:26.928 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.928 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.928 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:26.928 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.928 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.928 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.928 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.928 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.187 00:15:27.187 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.187 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.187 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.443 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.443 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.443 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.443 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.443 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.443 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.443 { 00:15:27.443 "cntlid": 25, 00:15:27.443 "qid": 0, 00:15:27.443 "state": "enabled", 00:15:27.443 "thread": "nvmf_tgt_poll_group_000", 00:15:27.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:27.443 "listen_address": { 00:15:27.443 "trtype": "TCP", 00:15:27.443 "adrfam": "IPv4", 00:15:27.443 "traddr": "10.0.0.2", 00:15:27.443 "trsvcid": "4420" 00:15:27.443 }, 00:15:27.443 "peer_address": { 00:15:27.443 "trtype": "TCP", 00:15:27.443 "adrfam": "IPv4", 00:15:27.443 "traddr": "10.0.0.1", 00:15:27.443 "trsvcid": "40350" 00:15:27.443 }, 00:15:27.443 "auth": { 00:15:27.443 "state": "completed", 00:15:27.443 "digest": "sha256", 00:15:27.443 "dhgroup": "ffdhe4096" 00:15:27.443 } 00:15:27.443 } 00:15:27.443 ]' 00:15:27.443 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.443 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.443 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.701 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.701 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.701 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.701 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.701 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.958 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:27.959 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.523 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.781 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.782 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.782 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.782 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.040 00:15:29.040 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.040 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.040 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.040 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.040 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.040 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.040 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.040 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.040 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.040 { 00:15:29.040 "cntlid": 27, 00:15:29.040 "qid": 0, 00:15:29.040 "state": "enabled", 00:15:29.040 "thread": "nvmf_tgt_poll_group_000", 00:15:29.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:29.040 "listen_address": { 00:15:29.040 "trtype": "TCP", 00:15:29.040 "adrfam": "IPv4", 00:15:29.040 "traddr": "10.0.0.2", 00:15:29.040 "trsvcid": "4420" 00:15:29.040 }, 00:15:29.040 "peer_address": { 00:15:29.040 "trtype": "TCP", 00:15:29.040 "adrfam": "IPv4", 00:15:29.040 "traddr": "10.0.0.1", 00:15:29.041 "trsvcid": "43334" 00:15:29.041 }, 00:15:29.041 "auth": { 00:15:29.041 "state": "completed", 00:15:29.041 "digest": "sha256", 00:15:29.041 "dhgroup": "ffdhe4096" 00:15:29.041 } 00:15:29.041 } 00:15:29.041 ]' 00:15:29.041 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.299 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.299 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.299 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:29.299 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.299 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.299 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.299 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.557 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:29.557 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:30.123 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:30.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.123 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:30.123 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.123 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.123 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.123 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.123 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:30.123 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.383 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.642 00:15:30.642 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:15:30.642 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.642 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.642 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.642 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.642 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.642 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.642 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.642 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.642 { 00:15:30.642 "cntlid": 29, 00:15:30.642 "qid": 0, 00:15:30.642 "state": "enabled", 00:15:30.642 "thread": "nvmf_tgt_poll_group_000", 00:15:30.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:30.642 "listen_address": { 00:15:30.642 "trtype": "TCP", 00:15:30.642 "adrfam": "IPv4", 00:15:30.642 "traddr": "10.0.0.2", 00:15:30.642 "trsvcid": "4420" 00:15:30.642 }, 00:15:30.642 "peer_address": { 00:15:30.642 "trtype": "TCP", 00:15:30.642 "adrfam": "IPv4", 00:15:30.642 "traddr": "10.0.0.1", 00:15:30.642 "trsvcid": "43370" 00:15:30.642 }, 00:15:30.642 "auth": { 00:15:30.642 "state": "completed", 00:15:30.642 "digest": "sha256", 00:15:30.642 "dhgroup": "ffdhe4096" 00:15:30.642 } 00:15:30.642 } 00:15:30.642 ]' 00:15:30.642 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.901 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.901 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.901 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:30.901 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.901 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.901 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.902 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.160 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:31.160 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: 
--dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.727 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.986 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.986 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.986 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.986 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.245 00:15:32.245 16:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.245 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.245 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.245 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.245 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.245 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.245 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.245 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.245 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.245 { 00:15:32.245 "cntlid": 31, 00:15:32.245 "qid": 0, 00:15:32.245 "state": "enabled", 00:15:32.245 "thread": "nvmf_tgt_poll_group_000", 00:15:32.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:32.245 "listen_address": { 00:15:32.245 "trtype": "TCP", 00:15:32.245 "adrfam": "IPv4", 00:15:32.245 "traddr": "10.0.0.2", 00:15:32.245 "trsvcid": "4420" 00:15:32.245 }, 00:15:32.245 "peer_address": { 00:15:32.245 "trtype": "TCP", 00:15:32.245 "adrfam": "IPv4", 00:15:32.245 "traddr": "10.0.0.1", 00:15:32.245 "trsvcid": "43408" 00:15:32.245 }, 00:15:32.245 "auth": { 00:15:32.245 "state": "completed", 00:15:32.245 "digest": "sha256", 00:15:32.245 "dhgroup": "ffdhe4096" 00:15:32.245 } 00:15:32.245 } 00:15:32.245 ]' 00:15:32.245 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.504 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.504 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.504 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:32.504 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.504 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.504 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.504 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.763 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:32.763 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.331 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.899 00:15:33.899 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.899 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.899 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.899 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.899 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.899 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.899 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.899 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.899 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.899 { 00:15:33.899 "cntlid": 33, 00:15:33.899 "qid": 0, 00:15:33.899 "state": "enabled", 00:15:33.899 "thread": "nvmf_tgt_poll_group_000", 00:15:33.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:33.899 "listen_address": { 00:15:33.899 "trtype": "TCP", 00:15:33.899 "adrfam": "IPv4", 00:15:33.899 "traddr": "10.0.0.2", 00:15:33.899 "trsvcid": "4420" 00:15:33.899 }, 00:15:33.899 "peer_address": { 00:15:33.899 "trtype": "TCP", 00:15:33.899 "adrfam": "IPv4", 00:15:33.899 "traddr": "10.0.0.1", 00:15:33.899 "trsvcid": "43444" 00:15:33.899 }, 00:15:33.899 "auth": { 00:15:33.899 "state": "completed", 00:15:33.899 "digest": "sha256", 00:15:33.899 "dhgroup": "ffdhe6144" 00:15:33.899 } 00:15:33.899 } 00:15:33.899 ]' 00:15:33.899 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.158 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.158 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.158 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.158 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.158 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.158 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.159 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.418 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret 
DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:34.418 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:34.986 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.986 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:34.986 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.986 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.986 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.986 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.986 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.986 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.986 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.554 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.554 { 00:15:35.554 "cntlid": 35, 00:15:35.554 "qid": 0, 00:15:35.554 "state": "enabled", 00:15:35.554 "thread": "nvmf_tgt_poll_group_000", 00:15:35.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:35.554 "listen_address": { 00:15:35.554 "trtype": "TCP", 00:15:35.554 "adrfam": "IPv4", 00:15:35.554 "traddr": "10.0.0.2", 00:15:35.554 "trsvcid": "4420" 00:15:35.554 }, 00:15:35.554 "peer_address": { 00:15:35.554 "trtype": "TCP", 00:15:35.554 "adrfam": "IPv4", 00:15:35.554 "traddr": "10.0.0.1", 00:15:35.554 "trsvcid": "43474" 00:15:35.554 }, 00:15:35.554 "auth": { 00:15:35.554 "state": "completed", 00:15:35.554 "digest": "sha256", 00:15:35.554 "dhgroup": "ffdhe6144" 00:15:35.554 } 00:15:35.554 } 00:15:35.554 ]' 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.554 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.813 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.813 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.813 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.813 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.813 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.813 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:35.813 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:36.381 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.641 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.209 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.209 { 00:15:37.209 "cntlid": 37, 00:15:37.209 "qid": 0, 00:15:37.209 "state": "enabled", 00:15:37.209 "thread": "nvmf_tgt_poll_group_000", 00:15:37.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:37.209 "listen_address": { 00:15:37.209 "trtype": "TCP", 00:15:37.209 "adrfam": "IPv4", 00:15:37.209 "traddr": "10.0.0.2", 00:15:37.209 "trsvcid": "4420" 00:15:37.209 }, 00:15:37.209 "peer_address": { 00:15:37.209 "trtype": "TCP", 00:15:37.209 "adrfam": "IPv4", 00:15:37.209 "traddr": "10.0.0.1", 00:15:37.209 "trsvcid": "43506" 00:15:37.209 }, 00:15:37.209 "auth": { 00:15:37.209 "state": "completed", 00:15:37.209 "digest": "sha256", 00:15:37.209 "dhgroup": "ffdhe6144" 00:15:37.209 } 00:15:37.209 } 00:15:37.209 ]' 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.209 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.468 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.468 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.468 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.468 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:37.468 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.727 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:37.727 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.296 16:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.296 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.864 00:15:38.864 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.864 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.864 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.864 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.864 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.864 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.864 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.864 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.864 { 00:15:38.864 "cntlid": 39, 00:15:38.864 "qid": 0, 00:15:38.864 "state": "enabled", 00:15:38.864 "thread": "nvmf_tgt_poll_group_000", 00:15:38.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:38.864 "listen_address": { 00:15:38.864 "trtype": "TCP", 00:15:38.864 "adrfam": "IPv4", 00:15:38.864 "traddr": "10.0.0.2", 00:15:38.864 "trsvcid": "4420" 00:15:38.864 }, 00:15:38.864 "peer_address": { 00:15:38.864 "trtype": "TCP", 00:15:38.864 "adrfam": "IPv4", 00:15:38.864 "traddr": "10.0.0.1", 00:15:38.864 "trsvcid": "43536" 00:15:38.864 }, 00:15:38.864 "auth": { 00:15:38.864 "state": "completed", 00:15:38.864 "digest": "sha256", 00:15:38.864 "dhgroup": "ffdhe6144" 00:15:38.864 } 00:15:38.864 } 00:15:38.864 ]' 00:15:38.864 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.864 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.864 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.122 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:39.122 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.122 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:39.122 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.122 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.383 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:39.383 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:39.999 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.999 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:39.999 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.999 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.999 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.999 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.999 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.999 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.999 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
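The passes above and below all follow the same shape; only the digest, the DH group and the key index change between iterations. A condensed sketch of one such iteration is given here for orientation, written against the rpc_cmd (target RPC) and hostrpc (scripts/rpc.py -s /var/tmp/host.sock) helpers whose expansions appear in the trace; HOSTNQN, HOSTID and the DHHC-1 secret variables are placeholders, not values copied from this log.

# sketch of one connect_authenticate <digest> <dhgroup> <keyid> pass, e.g. sha256 / ffdhe8192 / key0
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'          # expects nvme0
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0  # auth.digest/dhgroup/state inspected with jq
hostrpc bdev_nvme_detach_controller nvme0
# same handshake again from the kernel initiator (secrets elided; see the DHHC-1 strings in the trace)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

The kernel-initiator connect at the end exercises the same DH-HMAC-CHAP handshake a second time, outside the SPDK host stack; key3 passes omit the controller key, matching the add_host calls in the trace that pass only --dhchap-key key3.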
00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.999 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.601 00:15:40.601 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.601 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.601 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.861 { 00:15:40.861 "cntlid": 41, 00:15:40.861 "qid": 0, 00:15:40.861 "state": "enabled", 00:15:40.861 "thread": "nvmf_tgt_poll_group_000", 00:15:40.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:40.861 "listen_address": { 00:15:40.861 "trtype": "TCP", 00:15:40.861 "adrfam": "IPv4", 00:15:40.861 "traddr": "10.0.0.2", 00:15:40.861 "trsvcid": "4420" 00:15:40.861 }, 00:15:40.861 "peer_address": { 00:15:40.861 "trtype": "TCP", 00:15:40.861 "adrfam": "IPv4", 00:15:40.861 "traddr": "10.0.0.1", 00:15:40.861 "trsvcid": "40518" 00:15:40.861 }, 00:15:40.861 "auth": { 00:15:40.861 "state": "completed", 00:15:40.861 "digest": "sha256", 00:15:40.861 "dhgroup": "ffdhe8192" 00:15:40.861 } 00:15:40.861 } 00:15:40.861 ]' 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.861 16:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.861 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.120 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:41.120 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:41.687 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.687 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:41.687 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.687 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.687 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.688 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.688 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:41.688 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:41.946 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:41.946 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.947 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.947 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:41.947 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:41.947 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.947 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.947 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.947 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.947 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.947 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.947 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.947 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.205 00:15:42.205 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.205 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.205 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.465 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.465 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.465 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.465 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.465 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.465 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.465 { 00:15:42.465 "cntlid": 43, 00:15:42.465 "qid": 0, 00:15:42.465 "state": "enabled", 00:15:42.465 "thread": "nvmf_tgt_poll_group_000", 00:15:42.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:42.465 "listen_address": { 00:15:42.465 "trtype": "TCP", 00:15:42.465 "adrfam": "IPv4", 00:15:42.465 "traddr": "10.0.0.2", 00:15:42.465 "trsvcid": "4420" 00:15:42.465 }, 00:15:42.465 "peer_address": { 00:15:42.465 "trtype": "TCP", 00:15:42.465 "adrfam": "IPv4", 00:15:42.465 "traddr": "10.0.0.1", 00:15:42.465 "trsvcid": "40538" 00:15:42.465 }, 00:15:42.465 "auth": { 00:15:42.465 "state": "completed", 00:15:42.465 "digest": "sha256", 00:15:42.465 "dhgroup": "ffdhe8192" 00:15:42.465 } 00:15:42.465 } 00:15:42.465 ]' 00:15:42.465 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.465 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:42.465 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.723 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.723 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.724 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.724 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.724 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.982 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:42.982 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.548 16:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.548 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.113 00:15:44.113 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.113 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.113 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.371 { 00:15:44.371 "cntlid": 45, 00:15:44.371 "qid": 0, 00:15:44.371 "state": "enabled", 00:15:44.371 "thread": "nvmf_tgt_poll_group_000", 00:15:44.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:44.371 "listen_address": { 00:15:44.371 "trtype": "TCP", 00:15:44.371 "adrfam": "IPv4", 00:15:44.371 "traddr": "10.0.0.2", 00:15:44.371 "trsvcid": "4420" 00:15:44.371 }, 00:15:44.371 "peer_address": { 00:15:44.371 "trtype": "TCP", 00:15:44.371 "adrfam": "IPv4", 00:15:44.371 "traddr": "10.0.0.1", 00:15:44.371 "trsvcid": "40574" 00:15:44.371 }, 00:15:44.371 "auth": { 00:15:44.371 "state": "completed", 00:15:44.371 "digest": "sha256", 00:15:44.371 "dhgroup": "ffdhe8192" 00:15:44.371 } 00:15:44.371 } 00:15:44.371 ]' 00:15:44.371 
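The qpair listing just printed is what the script asserts on next: the auth block of the first qpair is filtered with jq and compared against the digest and DH group configured for this pass, plus the expected "completed" state. A minimal sketch of that check, using the same expected values as this ffdhe8192 iteration (the qpairs variable name is illustrative, not taken from the script):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]     # negotiated hash function
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]  # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # handshake finished successfully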
16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.371 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.630 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:44.630 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:45.222 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.222 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:45.222 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.222 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.222 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.222 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.222 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:45.222 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.481 16:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.481 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.049 00:15:46.049 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.049 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.049 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.049 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.049 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.049 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.049 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.049 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.049 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.049 { 00:15:46.049 "cntlid": 47, 00:15:46.049 "qid": 0, 00:15:46.049 "state": "enabled", 00:15:46.049 "thread": "nvmf_tgt_poll_group_000", 00:15:46.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:46.050 "listen_address": { 00:15:46.050 "trtype": "TCP", 00:15:46.050 "adrfam": "IPv4", 00:15:46.050 "traddr": "10.0.0.2", 00:15:46.050 "trsvcid": "4420" 00:15:46.050 }, 00:15:46.050 "peer_address": { 00:15:46.050 "trtype": "TCP", 00:15:46.050 "adrfam": "IPv4", 00:15:46.050 "traddr": "10.0.0.1", 00:15:46.050 "trsvcid": "40618" 00:15:46.050 }, 00:15:46.050 "auth": { 00:15:46.050 "state": "completed", 00:15:46.050 
"digest": "sha256", 00:15:46.050 "dhgroup": "ffdhe8192" 00:15:46.050 } 00:15:46.050 } 00:15:46.050 ]' 00:15:46.050 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.050 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.050 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.308 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.308 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.308 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.308 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.308 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.567 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:46.567 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:47.135 16:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.135 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.394 00:15:47.394 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.394 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.394 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.653 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.653 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.653 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.653 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.653 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.653 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.653 { 00:15:47.653 "cntlid": 49, 00:15:47.653 "qid": 0, 00:15:47.653 "state": "enabled", 00:15:47.653 "thread": "nvmf_tgt_poll_group_000", 00:15:47.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:47.653 "listen_address": { 00:15:47.653 "trtype": "TCP", 00:15:47.653 "adrfam": "IPv4", 
00:15:47.653 "traddr": "10.0.0.2", 00:15:47.653 "trsvcid": "4420" 00:15:47.653 }, 00:15:47.653 "peer_address": { 00:15:47.653 "trtype": "TCP", 00:15:47.653 "adrfam": "IPv4", 00:15:47.653 "traddr": "10.0.0.1", 00:15:47.653 "trsvcid": "40652" 00:15:47.653 }, 00:15:47.653 "auth": { 00:15:47.653 "state": "completed", 00:15:47.653 "digest": "sha384", 00:15:47.653 "dhgroup": "null" 00:15:47.653 } 00:15:47.653 } 00:15:47.653 ]' 00:15:47.653 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.653 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.653 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.653 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.653 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.912 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.912 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.912 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.912 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:47.912 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:48.479 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.479 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:48.479 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.479 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.479 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.479 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.479 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.479 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.739 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.997 00:15:48.997 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.997 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.997 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.256 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.256 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.256 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.256 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.256 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.256 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.256 { 00:15:49.256 "cntlid": 51, 00:15:49.256 "qid": 0, 00:15:49.256 "state": "enabled", 
00:15:49.256 "thread": "nvmf_tgt_poll_group_000", 00:15:49.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:49.256 "listen_address": { 00:15:49.256 "trtype": "TCP", 00:15:49.256 "adrfam": "IPv4", 00:15:49.256 "traddr": "10.0.0.2", 00:15:49.256 "trsvcid": "4420" 00:15:49.256 }, 00:15:49.256 "peer_address": { 00:15:49.256 "trtype": "TCP", 00:15:49.256 "adrfam": "IPv4", 00:15:49.256 "traddr": "10.0.0.1", 00:15:49.256 "trsvcid": "39120" 00:15:49.256 }, 00:15:49.256 "auth": { 00:15:49.256 "state": "completed", 00:15:49.256 "digest": "sha384", 00:15:49.256 "dhgroup": "null" 00:15:49.256 } 00:15:49.256 } 00:15:49.256 ]' 00:15:49.257 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.257 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.257 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.257 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.257 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.257 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.257 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.257 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.515 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:49.515 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:50.082 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.082 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:50.083 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.083 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.083 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.083 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.083 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:50.083 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.341 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.601 00:15:50.601 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.601 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.601 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.860 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.860 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.860 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.860 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.860 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.860 16:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.860 { 00:15:50.860 "cntlid": 53, 00:15:50.860 "qid": 0, 00:15:50.860 "state": "enabled", 00:15:50.860 "thread": "nvmf_tgt_poll_group_000", 00:15:50.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:50.860 "listen_address": { 00:15:50.860 "trtype": "TCP", 00:15:50.860 "adrfam": "IPv4", 00:15:50.860 "traddr": "10.0.0.2", 00:15:50.860 "trsvcid": "4420" 00:15:50.860 }, 00:15:50.860 "peer_address": { 00:15:50.860 "trtype": "TCP", 00:15:50.860 "adrfam": "IPv4", 00:15:50.860 "traddr": "10.0.0.1", 00:15:50.860 "trsvcid": "39146" 00:15:50.860 }, 00:15:50.860 "auth": { 00:15:50.860 "state": "completed", 00:15:50.860 "digest": "sha384", 00:15:50.860 "dhgroup": "null" 00:15:50.860 } 00:15:50.860 } 00:15:50.860 ]' 00:15:50.860 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.860 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.860 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.860 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:50.860 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.860 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.860 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.860 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.134 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:51.134 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:51.704 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.704 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:51.704 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.704 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.704 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.704 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:51.704 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:51.704 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.962 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.221 00:15:52.221 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.221 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.221 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.221 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.221 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.221 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.221 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.221 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.221 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.221 { 00:15:52.222 "cntlid": 55, 00:15:52.222 "qid": 0, 00:15:52.222 "state": "enabled", 00:15:52.222 "thread": "nvmf_tgt_poll_group_000", 00:15:52.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:52.222 "listen_address": { 00:15:52.222 "trtype": "TCP", 00:15:52.222 "adrfam": "IPv4", 00:15:52.222 "traddr": "10.0.0.2", 00:15:52.222 "trsvcid": "4420" 00:15:52.222 }, 00:15:52.222 "peer_address": { 00:15:52.222 "trtype": "TCP", 00:15:52.222 "adrfam": "IPv4", 00:15:52.222 "traddr": "10.0.0.1", 00:15:52.222 "trsvcid": "39188" 00:15:52.222 }, 00:15:52.222 "auth": { 00:15:52.222 "state": "completed", 00:15:52.222 "digest": "sha384", 00:15:52.222 "dhgroup": "null" 00:15:52.222 } 00:15:52.222 } 00:15:52.222 ]' 00:15:52.222 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.481 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.481 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.481 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:52.481 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.481 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.481 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.481 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.740 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:52.740 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.308 16:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.308 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.567 00:15:53.567 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.567 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.567 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.826 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.826 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.826 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:53.826 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.826 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.826 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.826 { 00:15:53.826 "cntlid": 57, 00:15:53.826 "qid": 0, 00:15:53.826 "state": "enabled", 00:15:53.826 "thread": "nvmf_tgt_poll_group_000", 00:15:53.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:53.826 "listen_address": { 00:15:53.826 "trtype": "TCP", 00:15:53.826 "adrfam": "IPv4", 00:15:53.826 "traddr": "10.0.0.2", 00:15:53.826 "trsvcid": "4420" 00:15:53.826 }, 00:15:53.826 "peer_address": { 00:15:53.826 "trtype": "TCP", 00:15:53.826 "adrfam": "IPv4", 00:15:53.826 "traddr": "10.0.0.1", 00:15:53.826 "trsvcid": "39224" 00:15:53.826 }, 00:15:53.826 "auth": { 00:15:53.826 "state": "completed", 00:15:53.826 "digest": "sha384", 00:15:53.826 "dhgroup": "ffdhe2048" 00:15:53.826 } 00:15:53.826 } 00:15:53.826 ]' 00:15:53.826 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.826 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.826 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.826 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:53.826 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.085 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.085 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.085 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.085 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:54.085 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:15:54.653 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.653 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:54.653 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.653 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.653 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.653 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.653 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:54.653 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.912 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.171 00:15:55.171 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.171 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.171 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.431 { 00:15:55.431 "cntlid": 59, 00:15:55.431 "qid": 0, 00:15:55.431 "state": "enabled", 00:15:55.431 "thread": "nvmf_tgt_poll_group_000", 00:15:55.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:55.431 "listen_address": { 00:15:55.431 "trtype": "TCP", 00:15:55.431 "adrfam": "IPv4", 00:15:55.431 "traddr": "10.0.0.2", 00:15:55.431 "trsvcid": "4420" 00:15:55.431 }, 00:15:55.431 "peer_address": { 00:15:55.431 "trtype": "TCP", 00:15:55.431 "adrfam": "IPv4", 00:15:55.431 "traddr": "10.0.0.1", 00:15:55.431 "trsvcid": "39262" 00:15:55.431 }, 00:15:55.431 "auth": { 00:15:55.431 "state": "completed", 00:15:55.431 "digest": "sha384", 00:15:55.431 "dhgroup": "ffdhe2048" 00:15:55.431 } 00:15:55.431 } 00:15:55.431 ]' 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.431 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.690 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:55.690 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:15:56.258 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.258 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:56.258 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.258 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.258 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.258 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.258 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:56.258 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.518 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.777 00:15:56.777 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.777 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.777 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.036 { 00:15:57.036 "cntlid": 61, 00:15:57.036 "qid": 0, 00:15:57.036 "state": "enabled", 00:15:57.036 "thread": "nvmf_tgt_poll_group_000", 00:15:57.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:57.036 "listen_address": { 00:15:57.036 "trtype": "TCP", 00:15:57.036 "adrfam": "IPv4", 00:15:57.036 "traddr": "10.0.0.2", 00:15:57.036 "trsvcid": "4420" 00:15:57.036 }, 00:15:57.036 "peer_address": { 00:15:57.036 "trtype": "TCP", 00:15:57.036 "adrfam": "IPv4", 00:15:57.036 "traddr": "10.0.0.1", 00:15:57.036 "trsvcid": "39284" 00:15:57.036 }, 00:15:57.036 "auth": { 00:15:57.036 "state": "completed", 00:15:57.036 "digest": "sha384", 00:15:57.036 "dhgroup": "ffdhe2048" 00:15:57.036 } 00:15:57.036 } 00:15:57.036 ]' 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.036 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.295 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:57.295 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:15:57.863 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.863 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:57.863 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.863 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.863 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.863 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.863 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:57.863 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.131 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.393 00:15:58.393 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.393 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.393 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.393 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.393 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.393 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.393 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.651 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.652 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.652 { 00:15:58.652 "cntlid": 63, 00:15:58.652 "qid": 0, 00:15:58.652 "state": "enabled", 00:15:58.652 "thread": "nvmf_tgt_poll_group_000", 00:15:58.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:58.652 "listen_address": { 00:15:58.652 "trtype": "TCP", 00:15:58.652 "adrfam": "IPv4", 00:15:58.652 "traddr": "10.0.0.2", 00:15:58.652 "trsvcid": "4420" 00:15:58.652 }, 00:15:58.652 "peer_address": { 00:15:58.652 "trtype": "TCP", 00:15:58.652 "adrfam": "IPv4", 00:15:58.652 "traddr": "10.0.0.1", 00:15:58.652 "trsvcid": "39326" 00:15:58.652 }, 00:15:58.652 "auth": { 00:15:58.652 "state": "completed", 00:15:58.652 "digest": "sha384", 00:15:58.652 "dhgroup": "ffdhe2048" 00:15:58.652 } 00:15:58.652 } 00:15:58.652 ]' 00:15:58.652 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.652 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.652 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.652 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.652 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.652 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.652 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.652 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.910 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:58.910 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:15:59.478 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:59.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.478 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:59.478 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.479 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.479 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.479 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.479 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.479 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:59.479 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.737 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.996 
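After each RPC-level check, the trace drives the same authentication path through nvme-cli: connect with the generated DH-HMAC-CHAP secrets, confirm the disconnect reports one controller, then drop the host entry before the next digest/dhgroup pass. A minimal sketch of that round trip follows (not part of the captured log); DHCHAP_SECRET and DHCHAP_CTRL_SECRET are placeholders for the DHHC-1:xx: strings the test generates for the key under test, while the address, NQNs and host ID are the ones shown in the trace.

hostid=00ad29c2-ccbd-e911-906e-0017a4403562
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
subnqn=nqn.2024-03.io.spdk:cnode0

# In-band DH-HMAC-CHAP from the Linux host; flags mirror the nvme connect lines in the trace.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
  --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"

# Expected output: "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
nvme disconnect -n "$subnqn"

# Remove the host entry on the target (default RPC socket), as the trace does before
# moving on to the next key or dhgroup.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
  nvmf_subsystem_remove_host "$subnqn" "$hostnqn"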
00:15:59.996 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.996 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.996 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.996 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.996 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.996 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.996 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.255 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.255 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.255 { 00:16:00.255 "cntlid": 65, 00:16:00.255 "qid": 0, 00:16:00.255 "state": "enabled", 00:16:00.255 "thread": "nvmf_tgt_poll_group_000", 00:16:00.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:00.255 "listen_address": { 00:16:00.255 "trtype": "TCP", 00:16:00.255 "adrfam": "IPv4", 00:16:00.255 "traddr": "10.0.0.2", 00:16:00.255 "trsvcid": "4420" 00:16:00.255 }, 00:16:00.255 "peer_address": { 00:16:00.255 "trtype": "TCP", 00:16:00.255 "adrfam": "IPv4", 00:16:00.255 "traddr": "10.0.0.1", 00:16:00.255 "trsvcid": "41580" 00:16:00.255 }, 00:16:00.255 "auth": { 00:16:00.255 "state": "completed", 00:16:00.255 "digest": "sha384", 00:16:00.255 "dhgroup": "ffdhe3072" 00:16:00.255 } 00:16:00.255 } 00:16:00.255 ]' 00:16:00.255 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.255 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.255 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.255 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.255 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.255 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.255 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.255 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.514 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:00.514 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:01.081 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.081 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:01.081 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.081 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.081 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.081 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.081 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:01.081 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.339 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.597 00:16:01.597 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.597 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.597 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.597 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.597 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.597 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.597 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.597 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.857 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.857 { 00:16:01.857 "cntlid": 67, 00:16:01.857 "qid": 0, 00:16:01.857 "state": "enabled", 00:16:01.857 "thread": "nvmf_tgt_poll_group_000", 00:16:01.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:01.857 "listen_address": { 00:16:01.857 "trtype": "TCP", 00:16:01.857 "adrfam": "IPv4", 00:16:01.857 "traddr": "10.0.0.2", 00:16:01.857 "trsvcid": "4420" 00:16:01.857 }, 00:16:01.857 "peer_address": { 00:16:01.857 "trtype": "TCP", 00:16:01.857 "adrfam": "IPv4", 00:16:01.857 "traddr": "10.0.0.1", 00:16:01.857 "trsvcid": "41602" 00:16:01.857 }, 00:16:01.857 "auth": { 00:16:01.857 "state": "completed", 00:16:01.857 "digest": "sha384", 00:16:01.857 "dhgroup": "ffdhe3072" 00:16:01.857 } 00:16:01.857 } 00:16:01.857 ]' 00:16:01.857 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.857 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.857 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.857 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:01.857 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.857 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.857 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.857 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.115 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret 
DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:02.115 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:02.683 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.683 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:02.683 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.683 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.683 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.683 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.683 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:02.683 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.942 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.201 00:16:03.201 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.201 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.201 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.460 { 00:16:03.460 "cntlid": 69, 00:16:03.460 "qid": 0, 00:16:03.460 "state": "enabled", 00:16:03.460 "thread": "nvmf_tgt_poll_group_000", 00:16:03.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:03.460 "listen_address": { 00:16:03.460 "trtype": "TCP", 00:16:03.460 "adrfam": "IPv4", 00:16:03.460 "traddr": "10.0.0.2", 00:16:03.460 "trsvcid": "4420" 00:16:03.460 }, 00:16:03.460 "peer_address": { 00:16:03.460 "trtype": "TCP", 00:16:03.460 "adrfam": "IPv4", 00:16:03.460 "traddr": "10.0.0.1", 00:16:03.460 "trsvcid": "41626" 00:16:03.460 }, 00:16:03.460 "auth": { 00:16:03.460 "state": "completed", 00:16:03.460 "digest": "sha384", 00:16:03.460 "dhgroup": "ffdhe3072" 00:16:03.460 } 00:16:03.460 } 00:16:03.460 ]' 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.460 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:03.719 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:03.719 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:04.287 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.287 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:04.287 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.287 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.287 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.287 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.287 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.287 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
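[Editor's note, illustrative only] In the key3 iteration above, unlike key0-key2, no bidirectional (controller) key is configured, so nvmf_subsystem_add_host and the attach are issued with --dhchap-key key3 only. The ${ckeys[$3]:+...} expansion at target/auth.sh@68 is what silently drops the extra argument. A minimal, self-contained illustration of that bash idiom (the array contents are hypothetical, chosen only for the demo):

  ckeys=("some-ctrlr-key" "")        # index 1 deliberately empty
  for keyid in 0 1; do
      # Expands to two words when a ctrlr key exists, to nothing otherwise.
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "keyid=$keyid adds ${#ckey[@]} extra arg(s): ${ckey[*]}"
  done
  # keyid=0 adds 2 extra arg(s): --dhchap-ctrlr-key ckey0
  # keyid=1 adds 0 extra arg(s):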
00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.546 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.805 00:16:04.805 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.805 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.805 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.805 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.805 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.805 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.805 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.805 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.063 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.063 { 00:16:05.063 "cntlid": 71, 00:16:05.063 "qid": 0, 00:16:05.063 "state": "enabled", 00:16:05.063 "thread": "nvmf_tgt_poll_group_000", 00:16:05.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:05.063 "listen_address": { 00:16:05.063 "trtype": "TCP", 00:16:05.063 "adrfam": "IPv4", 00:16:05.063 "traddr": "10.0.0.2", 00:16:05.063 "trsvcid": "4420" 00:16:05.063 }, 00:16:05.063 "peer_address": { 00:16:05.063 "trtype": "TCP", 00:16:05.063 "adrfam": "IPv4", 00:16:05.063 "traddr": "10.0.0.1", 00:16:05.063 "trsvcid": "41642" 00:16:05.063 }, 00:16:05.063 "auth": { 00:16:05.063 "state": "completed", 00:16:05.063 "digest": "sha384", 00:16:05.063 "dhgroup": "ffdhe3072" 00:16:05.063 } 00:16:05.063 } 00:16:05.063 ]' 00:16:05.063 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.063 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.063 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.063 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.063 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.063 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.063 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.063 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.322 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:05.322 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:05.890 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.890 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:05.890 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.890 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.890 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.890 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.890 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.890 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.890 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
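[Editor's sketch, not part of the captured log] The target-side setup above is followed, in the lines that continue below, by the SPDK-initiator attach and then by the same round trip through the kernel initiator. A condensed sketch of that kernel path, mirroring nvme_connect at target/auth.sh@36 and the tear-down at @82-83; it assumes an nvme-cli build with DH-CHAP support and uses placeholder secrets in place of the real DHHC-1 strings printed in the log.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

  # Kernel initiator: one I/O queue (-i 1), ctrl-loss-tmo=0 (-l 0), DH-CHAP
  # secrets passed directly on the command line (placeholders below).
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -l 0 \
      -q "$hostnqn" --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 \
      --dhchap-secret 'DHHC-1:00:<host-secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller-secret>:'

  # Tear-down before the next key / DH-group combination.
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"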
00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.149 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.415 00:16:06.415 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.415 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.415 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.415 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.674 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.674 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.674 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.674 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.674 { 00:16:06.674 "cntlid": 73, 00:16:06.674 "qid": 0, 00:16:06.674 "state": "enabled", 00:16:06.674 "thread": "nvmf_tgt_poll_group_000", 00:16:06.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:06.674 "listen_address": { 00:16:06.674 "trtype": "TCP", 00:16:06.674 "adrfam": "IPv4", 00:16:06.674 "traddr": "10.0.0.2", 00:16:06.674 "trsvcid": "4420" 00:16:06.674 }, 00:16:06.674 "peer_address": { 00:16:06.674 "trtype": "TCP", 00:16:06.674 "adrfam": "IPv4", 00:16:06.674 "traddr": "10.0.0.1", 00:16:06.674 "trsvcid": "41674" 00:16:06.674 }, 00:16:06.674 "auth": { 00:16:06.674 "state": "completed", 00:16:06.674 "digest": "sha384", 00:16:06.674 "dhgroup": "ffdhe4096" 00:16:06.674 } 00:16:06.674 } 00:16:06.674 ]' 00:16:06.674 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.674 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.674 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.674 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:06.675 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.675 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.675 
16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.675 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.934 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:06.934 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:07.501 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.501 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:07.501 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.501 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.501 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.501 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.501 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:07.501 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.760 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.019 00:16:08.019 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.019 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.019 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.019 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.019 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.019 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.019 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.019 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.019 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.019 { 00:16:08.019 "cntlid": 75, 00:16:08.019 "qid": 0, 00:16:08.019 "state": "enabled", 00:16:08.019 "thread": "nvmf_tgt_poll_group_000", 00:16:08.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:08.020 "listen_address": { 00:16:08.020 "trtype": "TCP", 00:16:08.020 "adrfam": "IPv4", 00:16:08.020 "traddr": "10.0.0.2", 00:16:08.020 "trsvcid": "4420" 00:16:08.020 }, 00:16:08.020 "peer_address": { 00:16:08.020 "trtype": "TCP", 00:16:08.020 "adrfam": "IPv4", 00:16:08.020 "traddr": "10.0.0.1", 00:16:08.020 "trsvcid": "41708" 00:16:08.020 }, 00:16:08.020 "auth": { 00:16:08.020 "state": "completed", 00:16:08.020 "digest": "sha384", 00:16:08.020 "dhgroup": "ffdhe4096" 00:16:08.020 } 00:16:08.020 } 00:16:08.020 ]' 00:16:08.277 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.277 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.277 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.277 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:08.277 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.277 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.277 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.278 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.536 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:08.536 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:09.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:09.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:09.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.363 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.621 00:16:09.621 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.621 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.621 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.880 { 00:16:09.880 "cntlid": 77, 00:16:09.880 "qid": 0, 00:16:09.880 "state": "enabled", 00:16:09.880 "thread": "nvmf_tgt_poll_group_000", 00:16:09.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:09.880 "listen_address": { 00:16:09.880 "trtype": "TCP", 00:16:09.880 "adrfam": "IPv4", 00:16:09.880 "traddr": "10.0.0.2", 00:16:09.880 "trsvcid": "4420" 00:16:09.880 }, 00:16:09.880 "peer_address": { 00:16:09.880 "trtype": "TCP", 00:16:09.880 "adrfam": "IPv4", 00:16:09.880 "traddr": "10.0.0.1", 00:16:09.880 "trsvcid": "34484" 00:16:09.880 }, 00:16:09.880 "auth": { 00:16:09.880 "state": "completed", 00:16:09.880 "digest": "sha384", 00:16:09.880 "dhgroup": "ffdhe4096" 00:16:09.880 } 00:16:09.880 } 00:16:09.880 ]' 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.880 16:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.880 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.139 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:10.139 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:10.706 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.706 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:10.706 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.706 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.706 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.706 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.706 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:10.706 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.965 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.223 00:16:11.223 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.223 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.223 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.482 { 00:16:11.482 "cntlid": 79, 00:16:11.482 "qid": 0, 00:16:11.482 "state": "enabled", 00:16:11.482 "thread": "nvmf_tgt_poll_group_000", 00:16:11.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:11.482 "listen_address": { 00:16:11.482 "trtype": "TCP", 00:16:11.482 "adrfam": "IPv4", 00:16:11.482 "traddr": "10.0.0.2", 00:16:11.482 "trsvcid": "4420" 00:16:11.482 }, 00:16:11.482 "peer_address": { 00:16:11.482 "trtype": "TCP", 00:16:11.482 "adrfam": "IPv4", 00:16:11.482 "traddr": "10.0.0.1", 00:16:11.482 "trsvcid": "34514" 00:16:11.482 }, 00:16:11.482 "auth": { 00:16:11.482 "state": "completed", 00:16:11.482 "digest": "sha384", 00:16:11.482 "dhgroup": "ffdhe4096" 00:16:11.482 } 00:16:11.482 } 00:16:11.482 ]' 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.482 16:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.482 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.741 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:11.741 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:12.309 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.309 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:12.309 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.309 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.309 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.309 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.309 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.310 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:12.310 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:12.568 16:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.568 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.827 00:16:12.827 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.827 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.827 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.086 { 00:16:13.086 "cntlid": 81, 00:16:13.086 "qid": 0, 00:16:13.086 "state": "enabled", 00:16:13.086 "thread": "nvmf_tgt_poll_group_000", 00:16:13.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:13.086 "listen_address": { 00:16:13.086 "trtype": "TCP", 00:16:13.086 "adrfam": "IPv4", 00:16:13.086 "traddr": "10.0.0.2", 00:16:13.086 "trsvcid": "4420" 00:16:13.086 }, 00:16:13.086 "peer_address": { 00:16:13.086 "trtype": "TCP", 00:16:13.086 "adrfam": "IPv4", 00:16:13.086 "traddr": "10.0.0.1", 00:16:13.086 "trsvcid": "34552" 00:16:13.086 }, 00:16:13.086 "auth": { 00:16:13.086 "state": "completed", 00:16:13.086 "digest": 
"sha384", 00:16:13.086 "dhgroup": "ffdhe6144" 00:16:13.086 } 00:16:13.086 } 00:16:13.086 ]' 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.086 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.345 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:13.345 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:13.912 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.912 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.912 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.912 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.912 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.912 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.912 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:13.912 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.171 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.430 00:16:14.430 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.430 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.430 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.689 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.689 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.689 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.689 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.689 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.689 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.689 { 00:16:14.689 "cntlid": 83, 00:16:14.689 "qid": 0, 00:16:14.689 "state": "enabled", 00:16:14.689 "thread": "nvmf_tgt_poll_group_000", 00:16:14.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:14.689 "listen_address": { 00:16:14.689 "trtype": "TCP", 00:16:14.689 "adrfam": "IPv4", 00:16:14.689 "traddr": "10.0.0.2", 00:16:14.689 
"trsvcid": "4420" 00:16:14.689 }, 00:16:14.689 "peer_address": { 00:16:14.689 "trtype": "TCP", 00:16:14.689 "adrfam": "IPv4", 00:16:14.689 "traddr": "10.0.0.1", 00:16:14.690 "trsvcid": "34578" 00:16:14.690 }, 00:16:14.690 "auth": { 00:16:14.690 "state": "completed", 00:16:14.690 "digest": "sha384", 00:16:14.690 "dhgroup": "ffdhe6144" 00:16:14.690 } 00:16:14.690 } 00:16:14.690 ]' 00:16:14.690 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.690 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.690 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.690 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.690 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.949 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.949 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.949 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.949 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:14.949 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:15.517 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.517 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:15.517 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.517 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.517 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.517 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.517 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:15.517 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:15.776 
16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.776 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.035 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.294 { 00:16:16.294 "cntlid": 85, 00:16:16.294 "qid": 0, 00:16:16.294 "state": "enabled", 00:16:16.294 "thread": "nvmf_tgt_poll_group_000", 00:16:16.294 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:16.294 "listen_address": { 00:16:16.294 "trtype": "TCP", 00:16:16.294 "adrfam": "IPv4", 00:16:16.294 "traddr": "10.0.0.2", 00:16:16.294 "trsvcid": "4420" 00:16:16.294 }, 00:16:16.294 "peer_address": { 00:16:16.294 "trtype": "TCP", 00:16:16.294 "adrfam": "IPv4", 00:16:16.294 "traddr": "10.0.0.1", 00:16:16.294 "trsvcid": "34602" 00:16:16.294 }, 00:16:16.294 "auth": { 00:16:16.294 "state": "completed", 00:16:16.294 "digest": "sha384", 00:16:16.294 "dhgroup": "ffdhe6144" 00:16:16.294 } 00:16:16.294 } 00:16:16.294 ]' 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.294 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.553 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:16.553 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.553 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.553 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.553 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.811 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:16.811 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.382 16:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.382 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.026 00:16:18.026 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.026 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.026 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.026 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.026 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.026 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.026 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.026 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.026 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.026 { 00:16:18.026 "cntlid": 87, 
00:16:18.026 "qid": 0, 00:16:18.026 "state": "enabled", 00:16:18.026 "thread": "nvmf_tgt_poll_group_000", 00:16:18.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:18.026 "listen_address": { 00:16:18.026 "trtype": "TCP", 00:16:18.026 "adrfam": "IPv4", 00:16:18.026 "traddr": "10.0.0.2", 00:16:18.026 "trsvcid": "4420" 00:16:18.026 }, 00:16:18.026 "peer_address": { 00:16:18.026 "trtype": "TCP", 00:16:18.026 "adrfam": "IPv4", 00:16:18.026 "traddr": "10.0.0.1", 00:16:18.026 "trsvcid": "34636" 00:16:18.026 }, 00:16:18.026 "auth": { 00:16:18.026 "state": "completed", 00:16:18.026 "digest": "sha384", 00:16:18.026 "dhgroup": "ffdhe6144" 00:16:18.026 } 00:16:18.026 } 00:16:18.026 ]' 00:16:18.026 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.026 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.026 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.026 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.026 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.346 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.346 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.346 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.346 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:18.346 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:18.914 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.914 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:18.914 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.914 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.914 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.914 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.914 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.914 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.914 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.173 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:19.173 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.173 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.173 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:19.173 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.174 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.174 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.174 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.174 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.174 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.174 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.174 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.174 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.741 00:16:19.741 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.741 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.741 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.741 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.741 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.741 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.741 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.741 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.741 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.741 { 00:16:19.741 "cntlid": 89, 00:16:19.741 "qid": 0, 00:16:19.741 "state": "enabled", 00:16:19.741 "thread": "nvmf_tgt_poll_group_000", 00:16:19.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:19.741 "listen_address": { 00:16:19.741 "trtype": "TCP", 00:16:19.741 "adrfam": "IPv4", 00:16:19.741 "traddr": "10.0.0.2", 00:16:19.741 "trsvcid": "4420" 00:16:19.741 }, 00:16:19.741 "peer_address": { 00:16:19.741 "trtype": "TCP", 00:16:19.741 "adrfam": "IPv4", 00:16:19.741 "traddr": "10.0.0.1", 00:16:19.741 "trsvcid": "49836" 00:16:19.741 }, 00:16:19.741 "auth": { 00:16:19.741 "state": "completed", 00:16:19.741 "digest": "sha384", 00:16:19.741 "dhgroup": "ffdhe8192" 00:16:19.741 } 00:16:19.741 } 00:16:19.741 ]' 00:16:19.741 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.000 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.000 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.000 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.000 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.000 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.000 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.000 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.258 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:20.258 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:20.826 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.826 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:20.826 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.826 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.826 16:17:51 
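Once the SPDK-internal path checks out, target/auth.sh@80-@83 repeat the handshake with the kernel initiator: nvme-cli is handed the same key material in its DHHC-1 string form, the connection is dropped again, and the host grant is removed so the next key/dhgroup combination starts clean. Condensed from the records above, with NQNs and the target address exactly as traced and the long DHHC-1 strings abbreviated here only for readability:

    # Kernel initiator: in-band DH-HMAC-CHAP with host secret and controller secret
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'

    # Tear down and revoke the grant before the next combination
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562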
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.826 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.826 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.826 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.827 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.395 00:16:21.395 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.395 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.395 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.654 { 00:16:21.654 "cntlid": 91, 00:16:21.654 "qid": 0, 00:16:21.654 "state": "enabled", 00:16:21.654 "thread": "nvmf_tgt_poll_group_000", 00:16:21.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:21.654 "listen_address": { 00:16:21.654 "trtype": "TCP", 00:16:21.654 "adrfam": "IPv4", 00:16:21.654 "traddr": "10.0.0.2", 00:16:21.654 "trsvcid": "4420" 00:16:21.654 }, 00:16:21.654 "peer_address": { 00:16:21.654 "trtype": "TCP", 00:16:21.654 "adrfam": "IPv4", 00:16:21.654 "traddr": "10.0.0.1", 00:16:21.654 "trsvcid": "49850" 00:16:21.654 }, 00:16:21.654 "auth": { 00:16:21.654 "state": "completed", 00:16:21.654 "digest": "sha384", 00:16:21.654 "dhgroup": "ffdhe8192" 00:16:21.654 } 00:16:21.654 } 00:16:21.654 ]' 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.654 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.913 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:21.913 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:22.481 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.482 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:22.482 16:17:53 
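The secrets handed to nvme connect use the NVMe in-band authentication secret representation: a DHHC-1: prefix, a two-digit transform indicator (00 means the secret is stored unhashed, 01/02/03 mean it was pre-transformed with SHA-256/384/512), the base64-encoded key material, and a trailing colon. That is also why the very same DHHC-1 strings reappear later in the sha512 passes of this log: the digest and DH group under test are negotiated per connection (constrained by bdev_nvme_set_options above), not encoded in the secret. Purely as an illustration of tooling that is not part of this trace, recent nvme-cli builds can generate such secrets; treat the exact flags as an assumption, since they vary between versions:

    # Hypothetical example, not taken from this run
    nvme gen-dhchap-key --key-length=48 --hmac=2 \
        --nqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562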
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.482 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.482 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.482 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.482 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:22.482 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.740 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.308 00:16:23.308 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.308 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.308 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.308 16:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.308 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.308 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.308 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.308 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.308 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.308 { 00:16:23.308 "cntlid": 93, 00:16:23.308 "qid": 0, 00:16:23.308 "state": "enabled", 00:16:23.308 "thread": "nvmf_tgt_poll_group_000", 00:16:23.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:23.308 "listen_address": { 00:16:23.308 "trtype": "TCP", 00:16:23.308 "adrfam": "IPv4", 00:16:23.308 "traddr": "10.0.0.2", 00:16:23.308 "trsvcid": "4420" 00:16:23.308 }, 00:16:23.308 "peer_address": { 00:16:23.308 "trtype": "TCP", 00:16:23.308 "adrfam": "IPv4", 00:16:23.308 "traddr": "10.0.0.1", 00:16:23.308 "trsvcid": "49862" 00:16:23.308 }, 00:16:23.308 "auth": { 00:16:23.308 "state": "completed", 00:16:23.308 "digest": "sha384", 00:16:23.308 "dhgroup": "ffdhe8192" 00:16:23.308 } 00:16:23.308 } 00:16:23.308 ]' 00:16:23.308 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.567 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.567 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.567 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.567 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.567 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.567 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.567 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.826 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:23.826 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:24.394 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.394 16:17:55 
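A small detail visible across the qpair dumps: the cntlid grows by two per iteration (81, 83, 85, and so on through this excerpt), which is consistent with each iteration creating two controllers on cnode0, one for the SPDK bdev_nvme attach whose qpair is dumped here and one for the follow-up kernel nvme connect, with dynamic controller IDs handed out in order. The peer_address block records the initiator's ephemeral TCP port, while listen_address is always the target listener at 10.0.0.2:4420. To pull that field out of the same dump used above:

    jq -r '.[0].cntlid' <<< "$qpairs"    # 81, 83, 85, ... across successive iterations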
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.394 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.394 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.394 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.394 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.395 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.395 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.654 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.913 00:16:24.913 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.913 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.913 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.171 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.171 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.171 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.171 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.171 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.171 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.171 { 00:16:25.171 "cntlid": 95, 00:16:25.171 "qid": 0, 00:16:25.171 "state": "enabled", 00:16:25.171 "thread": "nvmf_tgt_poll_group_000", 00:16:25.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:25.171 "listen_address": { 00:16:25.171 "trtype": "TCP", 00:16:25.171 "adrfam": "IPv4", 00:16:25.171 "traddr": "10.0.0.2", 00:16:25.171 "trsvcid": "4420" 00:16:25.171 }, 00:16:25.171 "peer_address": { 00:16:25.171 "trtype": "TCP", 00:16:25.171 "adrfam": "IPv4", 00:16:25.171 "traddr": "10.0.0.1", 00:16:25.171 "trsvcid": "49886" 00:16:25.171 }, 00:16:25.171 "auth": { 00:16:25.171 "state": "completed", 00:16:25.171 "digest": "sha384", 00:16:25.171 "dhgroup": "ffdhe8192" 00:16:25.171 } 00:16:25.171 } 00:16:25.171 ]' 00:16:25.171 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.171 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.171 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.430 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.430 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.430 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.430 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.430 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.689 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:25.689 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.257 16:17:57 
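The key3 rounds, including the one just traced, are the unidirectional case: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion at target/auth.sh@68 drops the controller key, the grant and the attach carry a single key, and nvme connect is given only --dhchap-secret. The target still authenticates the host, but the host does not challenge the controller back. Condensed, with the secret abbreviated:

    # Unidirectional DH-HMAC-CHAP: host proves itself, controller is not challenged
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 'DHHC-1:03:...'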
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.257 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.515 00:16:26.515 
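With the sha384/ffdhe8192 rounds done, the outer loop at target/auth.sh@118 advances to the sha512 digest, and the first dhgroup exercised in this new pass is null: in that mode the DH-HMAC-CHAP exchange is plain challenge-response on the shared secret, with no FFDHE key exchange mixed into the transcript. The host reconfiguration for this pass, as traced above:

    # sha512 digest, null DH group (challenge-response only, no Diffie-Hellman exchange)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null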
16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.515 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.515 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.775 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.775 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.775 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.775 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.775 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.775 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.775 { 00:16:26.775 "cntlid": 97, 00:16:26.775 "qid": 0, 00:16:26.775 "state": "enabled", 00:16:26.775 "thread": "nvmf_tgt_poll_group_000", 00:16:26.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:26.775 "listen_address": { 00:16:26.775 "trtype": "TCP", 00:16:26.775 "adrfam": "IPv4", 00:16:26.775 "traddr": "10.0.0.2", 00:16:26.775 "trsvcid": "4420" 00:16:26.775 }, 00:16:26.775 "peer_address": { 00:16:26.775 "trtype": "TCP", 00:16:26.775 "adrfam": "IPv4", 00:16:26.775 "traddr": "10.0.0.1", 00:16:26.775 "trsvcid": "49916" 00:16:26.775 }, 00:16:26.775 "auth": { 00:16:26.775 "state": "completed", 00:16:26.775 "digest": "sha512", 00:16:26.775 "dhgroup": "null" 00:16:26.775 } 00:16:26.775 } 00:16:26.775 ]' 00:16:26.775 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.775 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.775 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.775 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.775 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.034 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.034 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.034 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.034 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:27.034 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:27.602 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.602 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.602 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.602 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.602 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.602 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.602 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.602 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.861 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.119 00:16:28.119 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.119 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.119 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.377 { 00:16:28.377 "cntlid": 99, 00:16:28.377 "qid": 0, 00:16:28.377 "state": "enabled", 00:16:28.377 "thread": "nvmf_tgt_poll_group_000", 00:16:28.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:28.377 "listen_address": { 00:16:28.377 "trtype": "TCP", 00:16:28.377 "adrfam": "IPv4", 00:16:28.377 "traddr": "10.0.0.2", 00:16:28.377 "trsvcid": "4420" 00:16:28.377 }, 00:16:28.377 "peer_address": { 00:16:28.377 "trtype": "TCP", 00:16:28.377 "adrfam": "IPv4", 00:16:28.377 "traddr": "10.0.0.1", 00:16:28.377 "trsvcid": "49932" 00:16:28.377 }, 00:16:28.377 "auth": { 00:16:28.377 "state": "completed", 00:16:28.377 "digest": "sha512", 00:16:28.377 "dhgroup": "null" 00:16:28.377 } 00:16:28.377 } 00:16:28.377 ]' 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.377 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.636 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:28.636 16:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:29.204 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.204 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.204 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.204 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.204 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.204 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.204 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:29.204 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
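The trace above keeps exercising two small helpers from target/auth.sh: hostrpc (target/auth.sh@31), which forwards an RPC to the host-side SPDK application over its Unix socket, and bdev_connect (target/auth.sh@60), which attaches a controller to the test subsystem with the DH-HMAC-CHAP keys under test. A minimal sketch of that pattern, reconstructed only from the commands visible in the trace (the exact function bodies in auth.sh may differ, and the variable name hostnqn is an assumption):

  hostrpc() {
    # RPCs for the host-side SPDK application go to its own Unix socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
  }

  bdev_connect() {
    # attach a controller to the target subsystem, forwarding whatever
    # -b / --dhchap-key / --dhchap-ctrlr-key arguments the caller supplies
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 "$@"
  }

  # invoked in the trace as, for example:
  # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1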
00:16:29.463 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.722 00:16:29.722 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.722 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.722 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.982 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.982 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.982 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.982 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.982 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.982 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.982 { 00:16:29.982 "cntlid": 101, 00:16:29.982 "qid": 0, 00:16:29.982 "state": "enabled", 00:16:29.982 "thread": "nvmf_tgt_poll_group_000", 00:16:29.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:29.982 "listen_address": { 00:16:29.982 "trtype": "TCP", 00:16:29.982 "adrfam": "IPv4", 00:16:29.982 "traddr": "10.0.0.2", 00:16:29.982 "trsvcid": "4420" 00:16:29.982 }, 00:16:29.982 "peer_address": { 00:16:29.982 "trtype": "TCP", 00:16:29.982 "adrfam": "IPv4", 00:16:29.982 "traddr": "10.0.0.1", 00:16:29.982 "trsvcid": "59046" 00:16:29.982 }, 00:16:29.982 "auth": { 00:16:29.982 "state": "completed", 00:16:29.982 "digest": "sha512", 00:16:29.982 "dhgroup": "null" 00:16:29.982 } 00:16:29.982 } 00:16:29.982 ]' 00:16:29.982 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.982 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.982 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.982 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:29.982 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.241 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.241 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.241 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.241 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:30.241 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:30.809 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.809 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.809 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.809 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.809 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.809 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.809 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:30.809 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:31.068 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:31.068 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.068 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.068 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:31.068 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.068 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.068 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:31.068 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.068 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.068 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.068 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.069 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.069 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.327 00:16:31.327 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.328 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.328 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.586 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.586 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.586 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.587 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.587 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.587 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.587 { 00:16:31.587 "cntlid": 103, 00:16:31.587 "qid": 0, 00:16:31.587 "state": "enabled", 00:16:31.587 "thread": "nvmf_tgt_poll_group_000", 00:16:31.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.587 "listen_address": { 00:16:31.587 "trtype": "TCP", 00:16:31.587 "adrfam": "IPv4", 00:16:31.587 "traddr": "10.0.0.2", 00:16:31.587 "trsvcid": "4420" 00:16:31.587 }, 00:16:31.587 "peer_address": { 00:16:31.587 "trtype": "TCP", 00:16:31.587 "adrfam": "IPv4", 00:16:31.587 "traddr": "10.0.0.1", 00:16:31.587 "trsvcid": "59080" 00:16:31.587 }, 00:16:31.587 "auth": { 00:16:31.587 "state": "completed", 00:16:31.587 "digest": "sha512", 00:16:31.587 "dhgroup": "null" 00:16:31.587 } 00:16:31.587 } 00:16:31.587 ]' 00:16:31.587 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.587 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.587 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.587 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.587 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.587 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.587 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.587 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.846 16:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:31.846 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:32.416 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.416 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.416 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.416 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.416 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.416 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.416 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.416 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:32.416 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:32.676 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
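Each block of output is one iteration of the nested loop at target/auth.sh@118-@123: for every digest, DH group and key index, the host restricts its allowed DH-HMAC-CHAP parameters via bdev_nvme_set_options and then runs connect_authenticate. A sketch of that driver loop as it appears in the trace; the contents of the digests, dhgroups and keys arrays are set earlier in auth.sh and only partially visible here (sha512 with null/ffdhe2048/ffdhe3072 in this excerpt):

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # limit the host to a single digest/DH-group pair, then authenticate with key $keyid
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done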
00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.677 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.936 00:16:32.936 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.936 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.936 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.196 { 00:16:33.196 "cntlid": 105, 00:16:33.196 "qid": 0, 00:16:33.196 "state": "enabled", 00:16:33.196 "thread": "nvmf_tgt_poll_group_000", 00:16:33.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:33.196 "listen_address": { 00:16:33.196 "trtype": "TCP", 00:16:33.196 "adrfam": "IPv4", 00:16:33.196 "traddr": "10.0.0.2", 00:16:33.196 "trsvcid": "4420" 00:16:33.196 }, 00:16:33.196 "peer_address": { 00:16:33.196 "trtype": "TCP", 00:16:33.196 "adrfam": "IPv4", 00:16:33.196 "traddr": "10.0.0.1", 00:16:33.196 "trsvcid": "59110" 00:16:33.196 }, 00:16:33.196 "auth": { 00:16:33.196 "state": "completed", 00:16:33.196 "digest": "sha512", 00:16:33.196 "dhgroup": "ffdhe2048" 00:16:33.196 } 00:16:33.196 } 00:16:33.196 ]' 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.196 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.196 16:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.455 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:33.455 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:34.023 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.023 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.023 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.023 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.023 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.023 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.023 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:34.023 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.282 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.541 00:16:34.541 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.541 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.541 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.800 { 00:16:34.800 "cntlid": 107, 00:16:34.800 "qid": 0, 00:16:34.800 "state": "enabled", 00:16:34.800 "thread": "nvmf_tgt_poll_group_000", 00:16:34.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:34.800 "listen_address": { 00:16:34.800 "trtype": "TCP", 00:16:34.800 "adrfam": "IPv4", 00:16:34.800 "traddr": "10.0.0.2", 00:16:34.800 "trsvcid": "4420" 00:16:34.800 }, 00:16:34.800 "peer_address": { 00:16:34.800 "trtype": "TCP", 00:16:34.800 "adrfam": "IPv4", 00:16:34.800 "traddr": "10.0.0.1", 00:16:34.800 "trsvcid": "59142" 00:16:34.800 }, 00:16:34.800 "auth": { 00:16:34.800 "state": "completed", 00:16:34.800 "digest": "sha512", 00:16:34.800 "dhgroup": "ffdhe2048" 00:16:34.800 } 00:16:34.800 } 00:16:34.800 ]' 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.800 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.059 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:35.059 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:35.627 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.628 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.628 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.628 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.628 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.628 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.628 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:35.628 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
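After each attach, connect_authenticate confirms that the new admin queue pair really completed DH-HMAC-CHAP: it lists controllers on the host, then asks the target for the subsystem's qpairs and checks digest, DH group and auth state with jq (target/auth.sh@73-@78). A sketch of those checks, assuming rpc_cmd is the autotest helper that talks to the target's default RPC socket (its definition is not shown in this excerpt), with the jq-over-variable plumbing written as here-strings for brevity:

  # the host should now expose exactly one controller, nvme0
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

  # ask the target how the subsystem's qpair was authenticated
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]

  # drop the host-side controller before the nvme-cli connection is exercised
  hostrpc bdev_nvme_detach_controller nvme0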
00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.887 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.146 00:16:36.146 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.146 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.146 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.146 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.146 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.146 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.146 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.146 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.146 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.146 { 00:16:36.146 "cntlid": 109, 00:16:36.146 "qid": 0, 00:16:36.146 "state": "enabled", 00:16:36.146 "thread": "nvmf_tgt_poll_group_000", 00:16:36.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:36.146 "listen_address": { 00:16:36.146 "trtype": "TCP", 00:16:36.146 "adrfam": "IPv4", 00:16:36.146 "traddr": "10.0.0.2", 00:16:36.146 "trsvcid": "4420" 00:16:36.146 }, 00:16:36.146 "peer_address": { 00:16:36.146 "trtype": "TCP", 00:16:36.146 "adrfam": "IPv4", 00:16:36.146 "traddr": "10.0.0.1", 00:16:36.146 "trsvcid": "59174" 00:16:36.146 }, 00:16:36.146 "auth": { 00:16:36.146 "state": "completed", 00:16:36.146 "digest": "sha512", 00:16:36.146 "dhgroup": "ffdhe2048" 00:16:36.146 } 00:16:36.146 } 00:16:36.146 ]' 00:16:36.406 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.406 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.406 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.406 16:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.406 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.406 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.406 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.406 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.665 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:36.665 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.234 16:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.234 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.493 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.493 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.493 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.493 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.493 00:16:37.753 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.753 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.753 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.753 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.753 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.753 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.753 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.753 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.753 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.753 { 00:16:37.753 "cntlid": 111, 00:16:37.753 "qid": 0, 00:16:37.753 "state": "enabled", 00:16:37.753 "thread": "nvmf_tgt_poll_group_000", 00:16:37.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:37.753 "listen_address": { 00:16:37.753 "trtype": "TCP", 00:16:37.753 "adrfam": "IPv4", 00:16:37.753 "traddr": "10.0.0.2", 00:16:37.753 "trsvcid": "4420" 00:16:37.753 }, 00:16:37.753 "peer_address": { 00:16:37.753 "trtype": "TCP", 00:16:37.753 "adrfam": "IPv4", 00:16:37.753 "traddr": "10.0.0.1", 00:16:37.753 "trsvcid": "59198" 00:16:37.753 }, 00:16:37.753 "auth": { 00:16:37.753 "state": "completed", 00:16:37.753 "digest": "sha512", 00:16:37.753 "dhgroup": "ffdhe2048" 00:16:37.753 } 00:16:37.753 } 00:16:37.753 ]' 00:16:37.753 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.753 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.011 
16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.011 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.011 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.011 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.011 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.011 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.269 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:38.269 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:38.837 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.837 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.837 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.837 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.837 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.837 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.837 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.837 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:38.837 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.837 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.096 00:16:39.096 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.096 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.096 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.354 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.355 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.355 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.355 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.355 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.355 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.355 { 00:16:39.355 "cntlid": 113, 00:16:39.355 "qid": 0, 00:16:39.355 "state": "enabled", 00:16:39.355 "thread": "nvmf_tgt_poll_group_000", 00:16:39.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:39.355 "listen_address": { 00:16:39.355 "trtype": "TCP", 00:16:39.355 "adrfam": "IPv4", 00:16:39.355 "traddr": "10.0.0.2", 00:16:39.355 "trsvcid": "4420" 00:16:39.355 }, 00:16:39.355 "peer_address": { 00:16:39.355 "trtype": "TCP", 00:16:39.355 "adrfam": "IPv4", 00:16:39.355 "traddr": "10.0.0.1", 00:16:39.355 "trsvcid": "53512" 00:16:39.355 }, 00:16:39.355 "auth": { 00:16:39.355 "state": "completed", 00:16:39.355 "digest": "sha512", 00:16:39.355 "dhgroup": "ffdhe3072" 00:16:39.355 } 00:16:39.355 } 00:16:39.355 ]' 00:16:39.355 16:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.355 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.355 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.614 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.614 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.614 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.614 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.614 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.614 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:39.614 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.550 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.551 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.809 00:16:40.809 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.809 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.810 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.068 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.068 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.068 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.068 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.068 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.068 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.068 { 00:16:41.068 "cntlid": 115, 00:16:41.068 "qid": 0, 00:16:41.068 "state": "enabled", 00:16:41.068 "thread": "nvmf_tgt_poll_group_000", 00:16:41.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:41.068 "listen_address": { 00:16:41.068 "trtype": "TCP", 00:16:41.068 "adrfam": "IPv4", 00:16:41.068 "traddr": "10.0.0.2", 00:16:41.068 "trsvcid": "4420" 00:16:41.068 }, 00:16:41.068 "peer_address": { 00:16:41.068 "trtype": "TCP", 00:16:41.068 "adrfam": "IPv4", 
00:16:41.068 "traddr": "10.0.0.1", 00:16:41.068 "trsvcid": "53552" 00:16:41.068 }, 00:16:41.068 "auth": { 00:16:41.068 "state": "completed", 00:16:41.068 "digest": "sha512", 00:16:41.068 "dhgroup": "ffdhe3072" 00:16:41.068 } 00:16:41.068 } 00:16:41.068 ]' 00:16:41.068 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.068 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.068 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.068 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.069 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.069 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.069 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.069 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.328 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:41.328 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:41.896 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.896 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.896 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.896 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.896 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.896 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.896 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:41.896 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
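(For orientation: the entries above and below repeat one fixed per-key, per-dhgroup cycle. Condensed into plain rpc.py/nvme calls it looks roughly like the sketch below, reusing the socket paths, NQNs and flags from this run; the target-side rpc_cmd lines are assumed to hit the nvmf target's default RPC socket, and key2/ckey2 stand for key names already registered earlier in the test.)
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # host side: restrict DH-CHAP negotiation to the digest/dhgroup pair under test
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # target side (rpc_cmd in the trace): allow the host NQN with this key pair
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # attach a controller through the host RPC socket, then verify the authenticated qpair
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
  # tear down before the next key/dhgroup combination
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"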
00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.414 00:16:42.414 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.414 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.414 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.674 { 00:16:42.674 "cntlid": 117, 00:16:42.674 "qid": 0, 00:16:42.674 "state": "enabled", 00:16:42.674 "thread": "nvmf_tgt_poll_group_000", 00:16:42.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.674 "listen_address": { 00:16:42.674 "trtype": "TCP", 
00:16:42.674 "adrfam": "IPv4", 00:16:42.674 "traddr": "10.0.0.2", 00:16:42.674 "trsvcid": "4420" 00:16:42.674 }, 00:16:42.674 "peer_address": { 00:16:42.674 "trtype": "TCP", 00:16:42.674 "adrfam": "IPv4", 00:16:42.674 "traddr": "10.0.0.1", 00:16:42.674 "trsvcid": "53584" 00:16:42.674 }, 00:16:42.674 "auth": { 00:16:42.674 "state": "completed", 00:16:42.674 "digest": "sha512", 00:16:42.674 "dhgroup": "ffdhe3072" 00:16:42.674 } 00:16:42.674 } 00:16:42.674 ]' 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.674 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.933 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:42.933 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:43.501 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.501 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:43.501 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.501 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.501 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.501 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.501 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.501 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.760 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.019 00:16:44.019 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.019 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.019 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.279 { 00:16:44.279 "cntlid": 119, 00:16:44.279 "qid": 0, 00:16:44.279 "state": "enabled", 00:16:44.279 "thread": "nvmf_tgt_poll_group_000", 00:16:44.279 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:44.279 "listen_address": { 00:16:44.279 "trtype": "TCP", 00:16:44.279 "adrfam": "IPv4", 00:16:44.279 "traddr": "10.0.0.2", 00:16:44.279 "trsvcid": "4420" 00:16:44.279 }, 00:16:44.279 "peer_address": { 00:16:44.279 "trtype": "TCP", 00:16:44.279 "adrfam": "IPv4", 00:16:44.279 "traddr": "10.0.0.1", 00:16:44.279 "trsvcid": "53616" 00:16:44.279 }, 00:16:44.279 "auth": { 00:16:44.279 "state": "completed", 00:16:44.279 "digest": "sha512", 00:16:44.279 "dhgroup": "ffdhe3072" 00:16:44.279 } 00:16:44.279 } 00:16:44.279 ]' 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.279 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.538 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:44.538 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:45.107 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.107 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:45.107 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.107 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.107 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.107 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.107 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.107 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:45.107 16:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.366 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.626 00:16:45.626 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.626 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.626 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.885 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.885 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.885 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.885 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.885 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.885 16:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.885 { 00:16:45.885 "cntlid": 121, 00:16:45.885 "qid": 0, 00:16:45.885 "state": "enabled", 00:16:45.885 "thread": "nvmf_tgt_poll_group_000", 00:16:45.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.885 "listen_address": { 00:16:45.885 "trtype": "TCP", 00:16:45.885 "adrfam": "IPv4", 00:16:45.885 "traddr": "10.0.0.2", 00:16:45.885 "trsvcid": "4420" 00:16:45.885 }, 00:16:45.885 "peer_address": { 00:16:45.885 "trtype": "TCP", 00:16:45.885 "adrfam": "IPv4", 00:16:45.885 "traddr": "10.0.0.1", 00:16:45.885 "trsvcid": "53626" 00:16:45.885 }, 00:16:45.885 "auth": { 00:16:45.885 "state": "completed", 00:16:45.885 "digest": "sha512", 00:16:45.885 "dhgroup": "ffdhe4096" 00:16:45.885 } 00:16:45.885 } 00:16:45.885 ]' 00:16:45.885 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.885 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.885 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.885 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.885 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.885 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.885 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.885 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.143 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:46.143 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:46.711 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.711 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.711 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.711 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.711 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
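(The verification half of each pass checks three fields of the first qpair and then exercises a kernel-initiator connect with the same generated secrets. A sketch under the same assumptions as above, reusing $RPC and $HOSTNQN from the earlier sketch; the DHHC-1 secrets are abbreviated here, the full strings appear verbatim in the surrounding entries.)
  qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<<"$qpairs"   # expect sha512
  jq -r '.[0].auth.dhgroup' <<<"$qpairs"   # expect ffdhe4096 on this pass
  jq -r '.[0].auth.state'   <<<"$qpairs"   # expect completed
  # kernel initiator check with the same key material (secrets abbreviated)
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" \
      --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # "disconnected 1 controller(s)" indicates the authenticated connect succeeded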
00:16:46.711 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.711 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.711 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.970 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.971 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.230 00:16:47.230 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.230 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.230 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.489 { 00:16:47.489 "cntlid": 123, 00:16:47.489 "qid": 0, 00:16:47.489 "state": "enabled", 00:16:47.489 "thread": "nvmf_tgt_poll_group_000", 00:16:47.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:47.489 "listen_address": { 00:16:47.489 "trtype": "TCP", 00:16:47.489 "adrfam": "IPv4", 00:16:47.489 "traddr": "10.0.0.2", 00:16:47.489 "trsvcid": "4420" 00:16:47.489 }, 00:16:47.489 "peer_address": { 00:16:47.489 "trtype": "TCP", 00:16:47.489 "adrfam": "IPv4", 00:16:47.489 "traddr": "10.0.0.1", 00:16:47.489 "trsvcid": "53644" 00:16:47.489 }, 00:16:47.489 "auth": { 00:16:47.489 "state": "completed", 00:16:47.489 "digest": "sha512", 00:16:47.489 "dhgroup": "ffdhe4096" 00:16:47.489 } 00:16:47.489 } 00:16:47.489 ]' 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.489 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.748 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:47.748 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:48.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.315 16:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:48.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.574 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.833 00:16:48.833 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.833 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.833 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.091 16:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.091 { 00:16:49.091 "cntlid": 125, 00:16:49.091 "qid": 0, 00:16:49.091 "state": "enabled", 00:16:49.091 "thread": "nvmf_tgt_poll_group_000", 00:16:49.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:49.091 "listen_address": { 00:16:49.091 "trtype": "TCP", 00:16:49.091 "adrfam": "IPv4", 00:16:49.091 "traddr": "10.0.0.2", 00:16:49.091 "trsvcid": "4420" 00:16:49.091 }, 00:16:49.091 "peer_address": { 00:16:49.091 "trtype": "TCP", 00:16:49.091 "adrfam": "IPv4", 00:16:49.091 "traddr": "10.0.0.1", 00:16:49.091 "trsvcid": "49168" 00:16:49.091 }, 00:16:49.091 "auth": { 00:16:49.091 "state": "completed", 00:16:49.091 "digest": "sha512", 00:16:49.091 "dhgroup": "ffdhe4096" 00:16:49.091 } 00:16:49.091 } 00:16:49.091 ]' 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.091 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.350 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:49.350 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:49.918 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.919 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.919 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.919 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.919 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.919 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.919 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:49.919 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.178 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.437 00:16:50.437 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.437 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.437 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.696 16:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.696 { 00:16:50.696 "cntlid": 127, 00:16:50.696 "qid": 0, 00:16:50.696 "state": "enabled", 00:16:50.696 "thread": "nvmf_tgt_poll_group_000", 00:16:50.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:50.696 "listen_address": { 00:16:50.696 "trtype": "TCP", 00:16:50.696 "adrfam": "IPv4", 00:16:50.696 "traddr": "10.0.0.2", 00:16:50.696 "trsvcid": "4420" 00:16:50.696 }, 00:16:50.696 "peer_address": { 00:16:50.696 "trtype": "TCP", 00:16:50.696 "adrfam": "IPv4", 00:16:50.696 "traddr": "10.0.0.1", 00:16:50.696 "trsvcid": "49202" 00:16:50.696 }, 00:16:50.696 "auth": { 00:16:50.696 "state": "completed", 00:16:50.696 "digest": "sha512", 00:16:50.696 "dhgroup": "ffdhe4096" 00:16:50.696 } 00:16:50.696 } 00:16:50.696 ]' 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.696 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.955 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:50.955 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:51.523 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.523 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:51.523 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.523 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.523 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.523 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.523 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.523 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:51.523 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.782 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.042 00:16:52.042 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.042 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.042 
16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.301 { 00:16:52.301 "cntlid": 129, 00:16:52.301 "qid": 0, 00:16:52.301 "state": "enabled", 00:16:52.301 "thread": "nvmf_tgt_poll_group_000", 00:16:52.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:52.301 "listen_address": { 00:16:52.301 "trtype": "TCP", 00:16:52.301 "adrfam": "IPv4", 00:16:52.301 "traddr": "10.0.0.2", 00:16:52.301 "trsvcid": "4420" 00:16:52.301 }, 00:16:52.301 "peer_address": { 00:16:52.301 "trtype": "TCP", 00:16:52.301 "adrfam": "IPv4", 00:16:52.301 "traddr": "10.0.0.1", 00:16:52.301 "trsvcid": "49224" 00:16:52.301 }, 00:16:52.301 "auth": { 00:16:52.301 "state": "completed", 00:16:52.301 "digest": "sha512", 00:16:52.301 "dhgroup": "ffdhe6144" 00:16:52.301 } 00:16:52.301 } 00:16:52.301 ]' 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.301 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.561 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:52.561 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret 
DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:53.128 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.128 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:53.128 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.128 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.128 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.128 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.128 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.128 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.387 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.646 00:16:53.646 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.646 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.646 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.905 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.905 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.905 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.905 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.905 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.905 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.905 { 00:16:53.905 "cntlid": 131, 00:16:53.905 "qid": 0, 00:16:53.905 "state": "enabled", 00:16:53.905 "thread": "nvmf_tgt_poll_group_000", 00:16:53.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:53.905 "listen_address": { 00:16:53.905 "trtype": "TCP", 00:16:53.905 "adrfam": "IPv4", 00:16:53.905 "traddr": "10.0.0.2", 00:16:53.905 "trsvcid": "4420" 00:16:53.905 }, 00:16:53.905 "peer_address": { 00:16:53.905 "trtype": "TCP", 00:16:53.905 "adrfam": "IPv4", 00:16:53.905 "traddr": "10.0.0.1", 00:16:53.905 "trsvcid": "49260" 00:16:53.905 }, 00:16:53.905 "auth": { 00:16:53.905 "state": "completed", 00:16:53.905 "digest": "sha512", 00:16:53.905 "dhgroup": "ffdhe6144" 00:16:53.905 } 00:16:53.905 } 00:16:53.905 ]' 00:16:53.905 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.905 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.905 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.905 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.905 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.164 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.164 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.164 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.164 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:54.164 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:16:54.733 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.733 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:54.733 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.733 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.733 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.733 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.733 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.733 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:55.006 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:55.006 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.006 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.006 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.006 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:55.006 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.006 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.006 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.007 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.007 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.007 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.007 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.007 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.300 00:16:55.300 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.300 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.300 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.567 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.567 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.567 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.567 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.567 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.567 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.567 { 00:16:55.567 "cntlid": 133, 00:16:55.567 "qid": 0, 00:16:55.567 "state": "enabled", 00:16:55.567 "thread": "nvmf_tgt_poll_group_000", 00:16:55.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:55.567 "listen_address": { 00:16:55.567 "trtype": "TCP", 00:16:55.567 "adrfam": "IPv4", 00:16:55.567 "traddr": "10.0.0.2", 00:16:55.567 "trsvcid": "4420" 00:16:55.567 }, 00:16:55.567 "peer_address": { 00:16:55.567 "trtype": "TCP", 00:16:55.567 "adrfam": "IPv4", 00:16:55.567 "traddr": "10.0.0.1", 00:16:55.567 "trsvcid": "49298" 00:16:55.567 }, 00:16:55.567 "auth": { 00:16:55.567 "state": "completed", 00:16:55.567 "digest": "sha512", 00:16:55.567 "dhgroup": "ffdhe6144" 00:16:55.567 } 00:16:55.567 } 00:16:55.567 ]' 00:16:55.567 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.567 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.567 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.826 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.826 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.826 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.826 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.826 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.085 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret 
DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:56.085 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:16:56.653 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.653 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.653 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:56.654 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.221 00:16:57.221 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.221 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.222 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.222 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.222 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.222 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.222 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.222 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.222 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.222 { 00:16:57.222 "cntlid": 135, 00:16:57.222 "qid": 0, 00:16:57.222 "state": "enabled", 00:16:57.222 "thread": "nvmf_tgt_poll_group_000", 00:16:57.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:57.222 "listen_address": { 00:16:57.222 "trtype": "TCP", 00:16:57.222 "adrfam": "IPv4", 00:16:57.222 "traddr": "10.0.0.2", 00:16:57.222 "trsvcid": "4420" 00:16:57.222 }, 00:16:57.222 "peer_address": { 00:16:57.222 "trtype": "TCP", 00:16:57.222 "adrfam": "IPv4", 00:16:57.222 "traddr": "10.0.0.1", 00:16:57.222 "trsvcid": "49322" 00:16:57.222 }, 00:16:57.222 "auth": { 00:16:57.222 "state": "completed", 00:16:57.222 "digest": "sha512", 00:16:57.222 "dhgroup": "ffdhe6144" 00:16:57.222 } 00:16:57.222 } 00:16:57.222 ]' 00:16:57.222 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.222 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.222 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.480 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.480 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.480 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.480 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.480 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.481 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:57.481 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:16:58.048 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.048 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.048 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.048 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.048 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.048 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.048 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.048 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.048 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.307 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.874 00:16:58.874 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.874 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.874 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.133 { 00:16:59.133 "cntlid": 137, 00:16:59.133 "qid": 0, 00:16:59.133 "state": "enabled", 00:16:59.133 "thread": "nvmf_tgt_poll_group_000", 00:16:59.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:59.133 "listen_address": { 00:16:59.133 "trtype": "TCP", 00:16:59.133 "adrfam": "IPv4", 00:16:59.133 "traddr": "10.0.0.2", 00:16:59.133 "trsvcid": "4420" 00:16:59.133 }, 00:16:59.133 "peer_address": { 00:16:59.133 "trtype": "TCP", 00:16:59.133 "adrfam": "IPv4", 00:16:59.133 "traddr": "10.0.0.1", 00:16:59.133 "trsvcid": "49348" 00:16:59.133 }, 00:16:59.133 "auth": { 00:16:59.133 "state": "completed", 00:16:59.133 "digest": "sha512", 00:16:59.133 "dhgroup": "ffdhe8192" 00:16:59.133 } 00:16:59.133 } 00:16:59.133 ]' 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.133 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.392 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:59.392 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:16:59.960 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.960 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:59.960 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.960 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.960 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.960 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.960 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.960 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.220 16:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.220 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.789 00:17:00.789 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.789 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.789 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.789 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.789 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.789 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.789 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.789 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.789 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.789 { 00:17:00.789 "cntlid": 139, 00:17:00.789 "qid": 0, 00:17:00.789 "state": "enabled", 00:17:00.789 "thread": "nvmf_tgt_poll_group_000", 00:17:00.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:00.789 "listen_address": { 00:17:00.789 "trtype": "TCP", 00:17:00.789 "adrfam": "IPv4", 00:17:00.789 "traddr": "10.0.0.2", 00:17:00.789 "trsvcid": "4420" 00:17:00.789 }, 00:17:00.789 "peer_address": { 00:17:00.789 "trtype": "TCP", 00:17:00.789 "adrfam": "IPv4", 00:17:00.789 "traddr": "10.0.0.1", 00:17:00.789 "trsvcid": "57478" 00:17:00.789 }, 00:17:00.789 "auth": { 00:17:00.789 "state": "completed", 00:17:00.789 "digest": "sha512", 00:17:00.789 "dhgroup": "ffdhe8192" 00:17:00.789 } 00:17:00.789 } 00:17:00.789 ]' 00:17:00.789 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.789 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.789 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.047 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.047 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.047 16:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.048 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.048 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.306 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:17:01.306 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: --dhchap-ctrl-secret DHHC-1:02:YjYxZTczNzZmOGE0NjFjNmJjYWU4OGZiMGU3YTgwMWQ0YjViOGUzMjQ4MGYzZmNmgjk7Rw==: 00:17:01.873 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.873 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.873 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.873 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.873 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.873 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.874 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.874 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.134 16:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.134 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.393 00:17:02.393 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.393 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.393 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.652 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.652 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.652 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.652 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.652 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.652 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.652 { 00:17:02.652 "cntlid": 141, 00:17:02.652 "qid": 0, 00:17:02.652 "state": "enabled", 00:17:02.652 "thread": "nvmf_tgt_poll_group_000", 00:17:02.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:02.652 "listen_address": { 00:17:02.652 "trtype": "TCP", 00:17:02.652 "adrfam": "IPv4", 00:17:02.652 "traddr": "10.0.0.2", 00:17:02.652 "trsvcid": "4420" 00:17:02.652 }, 00:17:02.652 "peer_address": { 00:17:02.652 "trtype": "TCP", 00:17:02.652 "adrfam": "IPv4", 00:17:02.652 "traddr": "10.0.0.1", 00:17:02.652 "trsvcid": "57512" 00:17:02.652 }, 00:17:02.652 "auth": { 00:17:02.652 "state": "completed", 00:17:02.652 "digest": "sha512", 00:17:02.652 "dhgroup": "ffdhe8192" 00:17:02.652 } 00:17:02.652 } 00:17:02.652 ]' 00:17:02.652 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.652 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.652 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.911 16:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.911 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.911 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.911 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.911 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.169 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:17:03.169 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:01:YmZjMjllMGRiYzZiY2NmMGI2NDQxNWNjNzkxNjY2MDNlS7fY: 00:17:03.457 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.715 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.716 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.716 16:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:03.716 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.716 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.716 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.716 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.716 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.716 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.281 00:17:04.281 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.281 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.281 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.540 { 00:17:04.540 "cntlid": 143, 00:17:04.540 "qid": 0, 00:17:04.540 "state": "enabled", 00:17:04.540 "thread": "nvmf_tgt_poll_group_000", 00:17:04.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:04.540 "listen_address": { 00:17:04.540 "trtype": "TCP", 00:17:04.540 "adrfam": "IPv4", 00:17:04.540 "traddr": "10.0.0.2", 00:17:04.540 "trsvcid": "4420" 00:17:04.540 }, 00:17:04.540 "peer_address": { 00:17:04.540 "trtype": "TCP", 00:17:04.540 "adrfam": "IPv4", 00:17:04.540 "traddr": "10.0.0.1", 00:17:04.540 "trsvcid": "57540" 00:17:04.540 }, 00:17:04.540 "auth": { 00:17:04.540 "state": "completed", 00:17:04.540 "digest": "sha512", 00:17:04.540 "dhgroup": "ffdhe8192" 00:17:04.540 } 00:17:04.540 } 00:17:04.540 ]' 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.540 
16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.540 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.798 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:17:04.798 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:17:05.364 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.364 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.364 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.364 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.364 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.364 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:05.364 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:05.364 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:05.364 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.364 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.364 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.622 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:05.622 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.622 16:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.623 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.623 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.623 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.623 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.623 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.623 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.623 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.623 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.623 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.623 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.189 00:17:06.189 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.189 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.189 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.447 { 00:17:06.447 "cntlid": 145, 00:17:06.447 "qid": 0, 00:17:06.447 "state": "enabled", 00:17:06.447 "thread": "nvmf_tgt_poll_group_000", 00:17:06.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:06.447 "listen_address": { 00:17:06.447 "trtype": "TCP", 00:17:06.447 "adrfam": "IPv4", 00:17:06.447 "traddr": "10.0.0.2", 00:17:06.447 "trsvcid": "4420" 00:17:06.447 }, 00:17:06.447 "peer_address": { 00:17:06.447 
"trtype": "TCP", 00:17:06.447 "adrfam": "IPv4", 00:17:06.447 "traddr": "10.0.0.1", 00:17:06.447 "trsvcid": "57576" 00:17:06.447 }, 00:17:06.447 "auth": { 00:17:06.447 "state": "completed", 00:17:06.447 "digest": "sha512", 00:17:06.447 "dhgroup": "ffdhe8192" 00:17:06.447 } 00:17:06.447 } 00:17:06.447 ]' 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.447 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.706 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:17:06.706 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzFhODZiMmMzYTMxZDdmNmRlOTYyZTJkYjZjNTJjZDBjOWQ4ODc0Yjg2ZmIyMjE1fzcHww==: --dhchap-ctrl-secret DHHC-1:03:ZTQ4MDkyMDg0ZmU1MzgyNTNhZGYxZDNjNDk0ZDZiZDE3ZjBlOWZkZjc0NTljNmIyZDc0ZjdjYTI4ZWYzMGIzMcC+9C0=: 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:07.274 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:07.842 request: 00:17:07.842 { 00:17:07.842 "name": "nvme0", 00:17:07.842 "trtype": "tcp", 00:17:07.842 "traddr": "10.0.0.2", 00:17:07.842 "adrfam": "ipv4", 00:17:07.842 "trsvcid": "4420", 00:17:07.842 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:07.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:07.842 "prchk_reftag": false, 00:17:07.842 "prchk_guard": false, 00:17:07.842 "hdgst": false, 00:17:07.842 "ddgst": false, 00:17:07.842 "dhchap_key": "key2", 00:17:07.842 "allow_unrecognized_csi": false, 00:17:07.842 "method": "bdev_nvme_attach_controller", 00:17:07.842 "req_id": 1 00:17:07.842 } 00:17:07.842 Got JSON-RPC error response 00:17:07.842 response: 00:17:07.842 { 00:17:07.842 "code": -5, 00:17:07.842 "message": "Input/output error" 00:17:07.842 } 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.842 16:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.842 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:08.099 request: 00:17:08.099 { 00:17:08.099 "name": "nvme0", 00:17:08.099 "trtype": "tcp", 00:17:08.099 "traddr": "10.0.0.2", 00:17:08.099 "adrfam": "ipv4", 00:17:08.099 "trsvcid": "4420", 00:17:08.099 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:08.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:08.099 "prchk_reftag": false, 00:17:08.099 "prchk_guard": false, 00:17:08.099 "hdgst": false, 00:17:08.099 "ddgst": false, 00:17:08.099 "dhchap_key": "key1", 00:17:08.099 "dhchap_ctrlr_key": "ckey2", 00:17:08.099 "allow_unrecognized_csi": false, 00:17:08.099 "method": "bdev_nvme_attach_controller", 00:17:08.099 "req_id": 1 00:17:08.099 } 00:17:08.099 Got JSON-RPC error response 00:17:08.099 response: 00:17:08.099 { 00:17:08.099 "code": -5, 00:17:08.099 "message": "Input/output error" 00:17:08.099 } 00:17:08.099 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.099 16:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.099 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.099 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.099 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.099 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.099 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.099 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.099 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.100 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.665 request: 00:17:08.665 { 00:17:08.665 "name": "nvme0", 00:17:08.665 "trtype": "tcp", 00:17:08.665 "traddr": "10.0.0.2", 00:17:08.665 "adrfam": "ipv4", 00:17:08.665 "trsvcid": "4420", 00:17:08.665 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:08.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:08.665 "prchk_reftag": false, 00:17:08.665 "prchk_guard": false, 00:17:08.665 "hdgst": false, 00:17:08.665 "ddgst": false, 00:17:08.665 "dhchap_key": "key1", 00:17:08.665 "dhchap_ctrlr_key": "ckey1", 00:17:08.665 "allow_unrecognized_csi": false, 00:17:08.665 "method": "bdev_nvme_attach_controller", 00:17:08.665 "req_id": 1 00:17:08.665 } 00:17:08.666 Got JSON-RPC error response 00:17:08.666 response: 00:17:08.666 { 00:17:08.666 "code": -5, 00:17:08.666 "message": "Input/output error" 00:17:08.666 } 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1900579 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1900579 ']' 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1900579 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1900579 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1900579' 00:17:08.666 killing process with pid 1900579 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1900579 00:17:08.666 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1900579 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1922559 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1922559 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1922559 ']' 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.925 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.182 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.182 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:09.182 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:09.182 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:09.182 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.182 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.182 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:09.182 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1922559 00:17:09.182 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1922559 ']' 00:17:09.182 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.182 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.183 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
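The target has just been restarted with --wait-for-rpc and the nvmf_auth log flag; the step that follows in the log registers each generated DHCHAP secret file in the target keyring and binds it to the host entry. A minimal sketch of that sequence, assuming the same rpc.py path and key-file names that appear in this run (this is an illustration, not the test script itself; the target-side RPC uses the default /var/tmp/spdk.sock socket):

    #!/usr/bin/env bash
    # Sketch: register DH-HMAC-CHAP key files in the target keyring and
    # allow the test host to use them for the test subsystem.
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

    # Each secret lives in a file; keyring_file_add_key makes it addressable by name.
    "$RPC_PY" keyring_file_add_key key1  /tmp/spdk.key-sha256.qsb   # host key (file name from this run)
    "$RPC_PY" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Map   # controller (bidirectional) key

    # Bind the named keys to the host entry; --dhchap-ctrlr-key is optional.
    "$RPC_PY" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

The same pattern repeats for key0/ckey0 through key3 in the loop below before the sha512/ffdhe8192 connect is attempted.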
00:17:09.183 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.183 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.441 null0 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Uwm 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.OdU ]] 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OdU 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.qsb 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Map ]] 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Map 00:17:09.441 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:09.442 16:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.l4b 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.mj4 ]] 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mj4 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.law 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
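The entry above is the connect_authenticate step for sha512/ffdhe8192 with key3: the host-side RPC socket attaches a controller with that key, and the target's qpair listing is then checked (with the same jq filters seen earlier) for the negotiated digest, DH group, and completed auth state. A hedged sketch of that round trip, reusing the socket paths and NQNs from this log:

    # Sketch only: attach from the host with key3, then verify the auth block
    # the target reports for the new qpair.
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock                      # host-side SPDK application socket used above
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

    # Host side: attach a controller, authenticating with the named key.
    "$RPC_PY" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3

    # Target side (default /var/tmp/spdk.sock): dump the qpairs and inspect auth.
    qpairs=$("$RPC_PY" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Detach again once the state has been verified.
    "$RPC_PY" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0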
00:17:09.442 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.379 nvme0n1 00:17:10.379 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.379 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.379 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.379 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.637 { 00:17:10.637 "cntlid": 1, 00:17:10.637 "qid": 0, 00:17:10.637 "state": "enabled", 00:17:10.637 "thread": "nvmf_tgt_poll_group_000", 00:17:10.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:10.637 "listen_address": { 00:17:10.637 "trtype": "TCP", 00:17:10.637 "adrfam": "IPv4", 00:17:10.637 "traddr": "10.0.0.2", 00:17:10.637 "trsvcid": "4420" 00:17:10.637 }, 00:17:10.637 "peer_address": { 00:17:10.637 "trtype": "TCP", 00:17:10.637 "adrfam": "IPv4", 00:17:10.637 "traddr": "10.0.0.1", 00:17:10.637 "trsvcid": "38894" 00:17:10.637 }, 00:17:10.637 "auth": { 00:17:10.637 "state": "completed", 00:17:10.637 "digest": "sha512", 00:17:10.637 "dhgroup": "ffdhe8192" 00:17:10.637 } 00:17:10.637 } 00:17:10.637 ]' 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.637 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.895 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:17:10.896 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:17:11.463 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.464 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.464 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.464 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.464 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.464 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:11.464 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.464 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.464 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.464 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:11.464 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:11.722 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:11.722 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:11.722 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:11.722 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:11.722 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.722 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:11.722 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.722 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.722 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.722 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.722 request: 00:17:11.722 { 00:17:11.722 "name": "nvme0", 00:17:11.722 "trtype": "tcp", 00:17:11.722 "traddr": "10.0.0.2", 00:17:11.722 "adrfam": "ipv4", 00:17:11.722 "trsvcid": "4420", 00:17:11.722 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:11.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:11.722 "prchk_reftag": false, 00:17:11.722 "prchk_guard": false, 00:17:11.722 "hdgst": false, 00:17:11.722 "ddgst": false, 00:17:11.722 "dhchap_key": "key3", 00:17:11.722 "allow_unrecognized_csi": false, 00:17:11.722 "method": "bdev_nvme_attach_controller", 00:17:11.722 "req_id": 1 00:17:11.722 } 00:17:11.722 Got JSON-RPC error response 00:17:11.722 response: 00:17:11.722 { 00:17:11.722 "code": -5, 00:17:11.722 "message": "Input/output error" 00:17:11.722 } 00:17:11.982 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:11.982 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:11.982 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:11.982 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:11.982 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:11.982 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:11.982 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:11.982 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:11.982 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:11.982 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:11.982 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:11.982 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:11.982 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.982 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:11.982 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.982 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.982 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.982 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.241 request: 00:17:12.241 { 00:17:12.241 "name": "nvme0", 00:17:12.241 "trtype": "tcp", 00:17:12.241 "traddr": "10.0.0.2", 00:17:12.241 "adrfam": "ipv4", 00:17:12.241 "trsvcid": "4420", 00:17:12.241 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:12.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.241 "prchk_reftag": false, 00:17:12.241 "prchk_guard": false, 00:17:12.241 "hdgst": false, 00:17:12.241 "ddgst": false, 00:17:12.241 "dhchap_key": "key3", 00:17:12.241 "allow_unrecognized_csi": false, 00:17:12.241 "method": "bdev_nvme_attach_controller", 00:17:12.241 "req_id": 1 00:17:12.241 } 00:17:12.241 Got JSON-RPC error response 00:17:12.241 response: 00:17:12.241 { 00:17:12.241 "code": -5, 00:17:12.241 "message": "Input/output error" 00:17:12.241 } 00:17:12.241 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:12.241 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.241 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.241 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.241 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:12.241 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:12.241 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:12.241 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:12.241 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:12.241 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.500 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.759 request: 00:17:12.759 { 00:17:12.759 "name": "nvme0", 00:17:12.759 "trtype": "tcp", 00:17:12.759 "traddr": "10.0.0.2", 00:17:12.759 "adrfam": "ipv4", 00:17:12.759 "trsvcid": "4420", 00:17:12.759 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:12.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.759 "prchk_reftag": false, 00:17:12.759 "prchk_guard": false, 00:17:12.759 "hdgst": false, 00:17:12.759 "ddgst": false, 00:17:12.759 "dhchap_key": "key0", 00:17:12.759 "dhchap_ctrlr_key": "key1", 00:17:12.759 "allow_unrecognized_csi": false, 00:17:12.759 "method": "bdev_nvme_attach_controller", 00:17:12.759 "req_id": 1 00:17:12.759 } 00:17:12.759 Got JSON-RPC error response 00:17:12.759 response: 00:17:12.759 { 00:17:12.759 "code": -5, 00:17:12.759 "message": "Input/output error" 00:17:12.759 } 00:17:12.759 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:12.759 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.759 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.759 16:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.759 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:12.759 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:12.760 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:13.018 nvme0n1 00:17:13.018 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:13.018 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:13.018 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.277 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.277 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.277 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.536 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:13.536 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.536 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.536 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.536 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:13.536 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:13.536 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:14.474 nvme0n1 00:17:14.474 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:14.474 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:14.474 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:14.474 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.474 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:14.474 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.474 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.474 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.474 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:14.474 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:14.474 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.732 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.732 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:17:14.732 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: --dhchap-ctrl-secret DHHC-1:03:YTY5MjExYjQxOGQ0NTZlN2U5YjZhNDRiMmQxMjIzYWNhMjdiZmQxNGU3ZjA5NzJhOGIzNzFkZTU1OTVhOGRlZZ8d9OU=: 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:15.300 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:15.559 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.559 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:15.559 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.559 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:15.559 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:15.559 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:15.818 request: 00:17:15.818 { 00:17:15.818 "name": "nvme0", 00:17:15.818 "trtype": "tcp", 00:17:15.818 "traddr": "10.0.0.2", 00:17:15.818 "adrfam": "ipv4", 00:17:15.818 "trsvcid": "4420", 00:17:15.818 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:15.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:15.818 "prchk_reftag": false, 00:17:15.818 "prchk_guard": false, 00:17:15.818 "hdgst": false, 00:17:15.818 "ddgst": false, 00:17:15.818 "dhchap_key": "key1", 00:17:15.818 "allow_unrecognized_csi": false, 00:17:15.818 "method": "bdev_nvme_attach_controller", 00:17:15.818 "req_id": 1 00:17:15.818 } 00:17:15.818 Got JSON-RPC error response 00:17:15.818 response: 00:17:15.818 { 00:17:15.818 "code": -5, 00:17:15.818 "message": "Input/output error" 00:17:15.818 } 00:17:15.818 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:15.818 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.818 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.818 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.818 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:15.818 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:15.818 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:16.755 nvme0n1 00:17:16.755 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:16.755 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:16.755 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.755 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.755 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.755 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.015 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:17.015 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.015 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.015 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.015 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:17.015 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:17.015 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:17.274 nvme0n1 00:17:17.274 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:17.274 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:17.274 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.532 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.532 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.532 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: '' 2s 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: ]] 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Zjc5YzNmYTBiNTQ3NTgyZWNhYjUyOTFkMGUyOTk5OGPSUdOn: 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:17.791 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: 2s 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: ]] 00:17:19.696 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjQ4MTgyZDU3ZmFhMDNjZTNjNmFjMmJlMTVmZDM4MjFiMWQ4NzZhYjE2MTIxMzdjYJxOhA==: 00:17:19.697 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:19.697 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.230 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.489 nvme0n1 00:17:22.489 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:22.489 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.489 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.489 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.489 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:22.490 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.057 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:23.057 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:23.057 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.316 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.316 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:23.316 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.316 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.316 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.316 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:23.316 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@258 -- # jq -r '.[].name' 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:23.576 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:24.143 request: 00:17:24.143 { 00:17:24.143 "name": "nvme0", 00:17:24.144 "dhchap_key": "key1", 00:17:24.144 "dhchap_ctrlr_key": "key3", 00:17:24.144 "method": "bdev_nvme_set_keys", 00:17:24.144 "req_id": 1 00:17:24.144 } 00:17:24.144 Got JSON-RPC error response 00:17:24.144 response: 00:17:24.144 { 00:17:24.144 "code": -13, 00:17:24.144 "message": "Permission denied" 00:17:24.144 } 00:17:24.144 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:24.144 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.144 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.144 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.144 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:24.144 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:24.144 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.402 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:24.402 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:25.339 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:25.339 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:25.339 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.598 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:25.598 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:25.598 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.598 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.598 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.598 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:25.598 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:25.598 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:26.165 nvme0n1 00:17:26.165 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:26.165 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.165 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.427 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.427 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:26.427 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:26.427 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:26.427 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
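To summarize the trace above before the symmetric check continues below: target/auth.sh rotates the DH-HMAC-CHAP pair on the subsystem with nvmf_subsystem_set_keys, rotates the attached host controller to the same pair with bdev_nvme_set_keys, and then uses the NOT wrapper from autotest_common.sh to assert that switching the host to a pair the target does not accept fails with JSON-RPC error -13 (Permission denied); the rejected re-key drops the controller within the 1-second ctrlr-loss timeout, which is what the jq-length poll waits for. Condensed into the plain rpc.py calls visible in this run (a sketch, not the literal auth.sh code; paths are shortened, and key0..key3 refer to the keyring entries loaded earlier in the test):

    # Target side (default RPC socket): declare which key pair this host may use.
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host side (-s /var/tmp/host.sock): rotate the attached controller to the same pair.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Negative check: a pair the subsystem does not accept must fail (-13 Permission denied).
    if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key key3; then
        echo "re-key with a bad pair unexpectedly succeeded" >&2
        exit 1
    fi

    # The rejected re-key tears the controller down (--ctrlr-loss-timeout-sec 1);
    # poll until the host no longer lists it, as auth.sh does with jq length.
    while (( $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1
    done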
00:17:26.427 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.427 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:26.427 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.427 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:26.427 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:26.686 request: 00:17:26.686 { 00:17:26.686 "name": "nvme0", 00:17:26.686 "dhchap_key": "key2", 00:17:26.686 "dhchap_ctrlr_key": "key0", 00:17:26.686 "method": "bdev_nvme_set_keys", 00:17:26.686 "req_id": 1 00:17:26.686 } 00:17:26.686 Got JSON-RPC error response 00:17:26.686 response: 00:17:26.686 { 00:17:26.686 "code": -13, 00:17:26.686 "message": "Permission denied" 00:17:26.686 } 00:17:26.686 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:26.686 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.686 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.686 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.686 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:26.686 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:26.686 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.945 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:26.945 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:27.882 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:27.882 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:27.882 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1900765 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1900765 ']' 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1900765 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:28.142 
16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1900765 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1900765' 00:17:28.142 killing process with pid 1900765 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1900765 00:17:28.142 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1900765 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:28.712 rmmod nvme_tcp 00:17:28.712 rmmod nvme_fabrics 00:17:28.712 rmmod nvme_keyring 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1922559 ']' 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1922559 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1922559 ']' 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1922559 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1922559 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1922559' 00:17:28.712 killing process with pid 1922559 00:17:28.712 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1922559 00:17:28.712 16:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1922559 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.713 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.249 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:31.249 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Uwm /tmp/spdk.key-sha256.qsb /tmp/spdk.key-sha384.l4b /tmp/spdk.key-sha512.law /tmp/spdk.key-sha512.OdU /tmp/spdk.key-sha384.Map /tmp/spdk.key-sha256.mj4 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:31.249 00:17:31.249 real 2m31.677s 00:17:31.249 user 5m49.399s 00:17:31.249 sys 0m24.297s 00:17:31.249 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.249 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.249 ************************************ 00:17:31.249 END TEST nvmf_auth_target 00:17:31.249 ************************************ 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.249 ************************************ 00:17:31.249 START TEST nvmf_bdevio_no_huge 00:17:31.249 ************************************ 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:31.249 * Looking for test storage... 
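Before the bdevio setup continues below, note the teardown pattern the auth test just logged: killprocess stops the host-side SPDK instance (pid 1900765, reactor_1), nvmftestfini then stops the nvmf target (pid 1922559, reactor_0), unloads nvme-tcp, nvme-fabrics and nvme-keyring, restores iptables while dropping the SPDK_NVMF-tagged rules, flushes the test interfaces and namespace, and the exit trap removes the throwaway /tmp/spdk.key-* files. A simplified sketch of the kill-and-verify idiom that produced the "killing process with pid ..." lines above (the real helper lives in common/autotest_common.sh; the pid variables here are illustrative only):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0      # nothing to do, already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_0 / reactor_1 in this run
        [[ $name == sudo ]] && return 1             # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap it so sockets and ports are freed
    }

    killprocess "$host_pid"     # owner of /var/tmp/host.sock
    killprocess "$nvmf_pid"     # nvmf_tgt
    rm -f /tmp/spdk.key-*       # per-run DH-HMAC-CHAP key material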
00:17:31.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:31.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.249 --rc genhtml_branch_coverage=1 00:17:31.249 --rc genhtml_function_coverage=1 00:17:31.249 --rc genhtml_legend=1 00:17:31.249 --rc geninfo_all_blocks=1 00:17:31.249 --rc geninfo_unexecuted_blocks=1 00:17:31.249 00:17:31.249 ' 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:31.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.249 --rc genhtml_branch_coverage=1 00:17:31.249 --rc genhtml_function_coverage=1 00:17:31.249 --rc genhtml_legend=1 00:17:31.249 --rc geninfo_all_blocks=1 00:17:31.249 --rc geninfo_unexecuted_blocks=1 00:17:31.249 00:17:31.249 ' 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:31.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.249 --rc genhtml_branch_coverage=1 00:17:31.249 --rc genhtml_function_coverage=1 00:17:31.249 --rc genhtml_legend=1 00:17:31.249 --rc geninfo_all_blocks=1 00:17:31.249 --rc geninfo_unexecuted_blocks=1 00:17:31.249 00:17:31.249 ' 00:17:31.249 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:31.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.249 --rc genhtml_branch_coverage=1 00:17:31.250 --rc genhtml_function_coverage=1 00:17:31.250 --rc genhtml_legend=1 00:17:31.250 --rc geninfo_all_blocks=1 00:17:31.250 --rc geninfo_unexecuted_blocks=1 00:17:31.250 00:17:31.250 ' 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:31.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:31.250 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:37.821 
16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:37.821 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:37.821 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:37.821 Found net devices under 0000:86:00.0: cvl_0_0 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:37.822 Found net devices under 0000:86:00.1: cvl_0_1 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.822 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:37.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:17:37.822 00:17:37.822 --- 10.0.0.2 ping statistics --- 00:17:37.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.822 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:17:37.822 00:17:37.822 --- 10.0.0.1 ping statistics --- 00:17:37.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.822 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1929444 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1929444 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1929444 ']' 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.822 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.822 [2024-11-20 16:19:08.284422] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:17:37.822 [2024-11-20 16:19:08.284465] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:37.822 [2024-11-20 16:19:08.368759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.822 [2024-11-20 16:19:08.415205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.822 [2024-11-20 16:19:08.415255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.822 [2024-11-20 16:19:08.415262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.822 [2024-11-20 16:19:08.415268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.822 [2024-11-20 16:19:08.415273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
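The reactor messages that follow mean nvmf_tgt is coming up inside the namespace. To recap the nvmftestinit steps traced above: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target port (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator port (10.0.0.1/24), an iptables rule tagged SPDK_NVMF opens TCP 4420, both directions are ping-verified, and the target is launched inside the namespace with --no-huge -s 1024, i.e. 1024 MB of ordinary pages instead of hugepages, which is exactly what this test exercises. Condensed from the commands in the trace (paths shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78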
00:17:37.822 [2024-11-20 16:19:08.416445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:37.822 [2024-11-20 16:19:08.416551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:37.822 [2024-11-20 16:19:08.416655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.822 [2024-11-20 16:19:08.416657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.082 [2024-11-20 16:19:09.181018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.082 Malloc0 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.082 [2024-11-20 16:19:09.225280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:38.082 { 00:17:38.082 "params": { 00:17:38.082 "name": "Nvme$subsystem", 00:17:38.082 "trtype": "$TEST_TRANSPORT", 00:17:38.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:38.082 "adrfam": "ipv4", 00:17:38.082 "trsvcid": "$NVMF_PORT", 00:17:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:38.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:38.082 "hdgst": ${hdgst:-false}, 00:17:38.082 "ddgst": ${ddgst:-false} 00:17:38.082 }, 00:17:38.082 "method": "bdev_nvme_attach_controller" 00:17:38.082 } 00:17:38.082 EOF 00:17:38.082 )") 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:38.082 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:38.082 "params": { 00:17:38.082 "name": "Nvme1", 00:17:38.082 "trtype": "tcp", 00:17:38.082 "traddr": "10.0.0.2", 00:17:38.082 "adrfam": "ipv4", 00:17:38.082 "trsvcid": "4420", 00:17:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.082 "hdgst": false, 00:17:38.082 "ddgst": false 00:17:38.082 }, 00:17:38.082 "method": "bdev_nvme_attach_controller" 00:17:38.082 }' 00:17:38.082 [2024-11-20 16:19:09.275294] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
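With the listener up, bdevio.sh provisions the target over the default RPC socket and then runs the bdevio app against it; the /dev/fd/62 in the command line is the process-substitution fd carrying the JSON that gen_nvmf_target_json printed above (a single bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420). Collected from the xtrace for readability (paths shortened; the <(...) form is inferred from the /dev/fd/62 argument):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # bdevio opens the namespace over NVMe/TCP as Nvme1n1 and runs its CUnit suite,
    # also with --no-huge so neither side of the test needs hugepages.
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024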
00:17:38.082 [2024-11-20 16:19:09.275339] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1929690 ] 00:17:38.341 [2024-11-20 16:19:09.351440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:38.341 [2024-11-20 16:19:09.399617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.341 [2024-11-20 16:19:09.399726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.341 [2024-11-20 16:19:09.399726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.341 I/O targets: 00:17:38.341 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:38.341 00:17:38.341 00:17:38.341 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.341 http://cunit.sourceforge.net/ 00:17:38.341 00:17:38.341 00:17:38.341 Suite: bdevio tests on: Nvme1n1 00:17:38.599 Test: blockdev write read block ...passed 00:17:38.599 Test: blockdev write zeroes read block ...passed 00:17:38.599 Test: blockdev write zeroes read no split ...passed 00:17:38.599 Test: blockdev write zeroes read split ...passed 00:17:38.599 Test: blockdev write zeroes read split partial ...passed 00:17:38.599 Test: blockdev reset ...[2024-11-20 16:19:09.687749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:38.599 [2024-11-20 16:19:09.687808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a6920 (9): Bad file descriptor 00:17:38.599 [2024-11-20 16:19:09.703074] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:38.599 passed 00:17:38.599 Test: blockdev write read 8 blocks ...passed 00:17:38.599 Test: blockdev write read size > 128k ...passed 00:17:38.599 Test: blockdev write read invalid size ...passed 00:17:38.599 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:38.599 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:38.599 Test: blockdev write read max offset ...passed 00:17:38.858 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:38.858 Test: blockdev writev readv 8 blocks ...passed 00:17:38.858 Test: blockdev writev readv 30 x 1block ...passed 00:17:38.858 Test: blockdev writev readv block ...passed 00:17:38.858 Test: blockdev writev readv size > 128k ...passed 00:17:38.858 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:38.858 Test: blockdev comparev and writev ...[2024-11-20 16:19:09.953826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.858 [2024-11-20 16:19:09.953854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.858 [2024-11-20 16:19:09.953868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.858 [2024-11-20 16:19:09.953876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.858 [2024-11-20 16:19:09.954099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.858 [2024-11-20 16:19:09.954109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:38.858 [2024-11-20 16:19:09.954120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.858 [2024-11-20 16:19:09.954128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:38.858 [2024-11-20 16:19:09.954375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.858 [2024-11-20 16:19:09.954385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:38.858 [2024-11-20 16:19:09.954397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.858 [2024-11-20 16:19:09.954403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:38.858 [2024-11-20 16:19:09.954627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.858 [2024-11-20 16:19:09.954637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:38.858 [2024-11-20 16:19:09.954648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.858 [2024-11-20 16:19:09.954656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:38.858 passed 00:17:38.858 Test: blockdev nvme passthru rw ...passed 00:17:38.858 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:19:10.036558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.858 [2024-11-20 16:19:10.036583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:38.858 [2024-11-20 16:19:10.036691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.858 [2024-11-20 16:19:10.036700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:38.858 [2024-11-20 16:19:10.036795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.858 [2024-11-20 16:19:10.036804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:38.858 [2024-11-20 16:19:10.036907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.858 [2024-11-20 16:19:10.036916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:38.858 passed 00:17:38.858 Test: blockdev nvme admin passthru ...passed 00:17:39.117 Test: blockdev copy ...passed 00:17:39.117 00:17:39.117 Run Summary: Type Total Ran Passed Failed Inactive 00:17:39.117 suites 1 1 n/a 0 0 00:17:39.117 tests 23 23 23 0 0 00:17:39.118 asserts 152 152 152 0 n/a 00:17:39.118 00:17:39.118 Elapsed time = 1.060 seconds 00:17:39.118 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.118 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.118 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:39.377 rmmod nvme_tcp 00:17:39.377 rmmod nvme_fabrics 00:17:39.377 rmmod nvme_keyring 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1929444 ']' 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1929444 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1929444 ']' 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1929444 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1929444 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1929444' 00:17:39.377 killing process with pid 1929444 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1929444 00:17:39.377 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1929444 00:17:39.636 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:39.636 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:39.636 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:39.636 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:39.636 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:39.637 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:39.637 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:39.637 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:39.637 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:39.637 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.637 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.637 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.248 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:42.248 00:17:42.248 real 0m10.792s 00:17:42.248 user 0m13.018s 00:17:42.248 sys 0m5.364s 00:17:42.248 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.248 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.248 ************************************ 00:17:42.248 END TEST nvmf_bdevio_no_huge 00:17:42.248 ************************************ 00:17:42.248 16:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:42.248 16:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:42.248 16:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.248 16:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:42.248 ************************************ 00:17:42.248 START TEST nvmf_tls 00:17:42.248 ************************************ 00:17:42.248 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:42.248 * Looking for test storage... 00:17:42.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.248 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:42.248 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:42.248 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:42.248 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:42.248 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.248 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.248 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.248 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.248 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.248 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:42.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.249 --rc genhtml_branch_coverage=1 00:17:42.249 --rc genhtml_function_coverage=1 00:17:42.249 --rc genhtml_legend=1 00:17:42.249 --rc geninfo_all_blocks=1 00:17:42.249 --rc geninfo_unexecuted_blocks=1 00:17:42.249 00:17:42.249 ' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:42.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.249 --rc genhtml_branch_coverage=1 00:17:42.249 --rc genhtml_function_coverage=1 00:17:42.249 --rc genhtml_legend=1 00:17:42.249 --rc geninfo_all_blocks=1 00:17:42.249 --rc geninfo_unexecuted_blocks=1 00:17:42.249 00:17:42.249 ' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:42.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.249 --rc genhtml_branch_coverage=1 00:17:42.249 --rc genhtml_function_coverage=1 00:17:42.249 --rc genhtml_legend=1 00:17:42.249 --rc geninfo_all_blocks=1 00:17:42.249 --rc geninfo_unexecuted_blocks=1 00:17:42.249 00:17:42.249 ' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:42.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.249 --rc genhtml_branch_coverage=1 00:17:42.249 --rc genhtml_function_coverage=1 00:17:42.249 --rc genhtml_legend=1 00:17:42.249 --rc geninfo_all_blocks=1 00:17:42.249 --rc geninfo_unexecuted_blocks=1 00:17:42.249 00:17:42.249 ' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
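The cmp_versions walk just traced (both version strings split on '.', '-' and ':', then compared field by field) decides which lcov option spelling to use; because lt 1.15 2 succeeds, the older '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' form is exported. A condensed, approximate reconstruction of that helper, for readability only (the real code lives in scripts/common.sh and also validates each field with decimal()):

lt() {   # returns 0 (true) when version $1 sorts before version $2
  local -a ver1 ver2
  local v
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not strictly less-than
}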
00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:42.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:42.249 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:48.820 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:48.821 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:48.821 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:48.821 Found net devices under 0000:86:00.0: cvl_0_0 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:48.821 Found net devices under 0000:86:00.1: cvl_0_1 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:48.821 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:48.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:17:48.821 00:17:48.821 --- 10.0.0.2 ping statistics --- 00:17:48.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.821 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:17:48.821 00:17:48.821 --- 10.0.0.1 ping statistics --- 00:17:48.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.821 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.821 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1933379 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1933379 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1933379 ']' 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.822 [2024-11-20 16:19:19.143982] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
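Stripped of the xtrace noise, the nvmf_tcp_init sequence traced above builds the namespace topology the TLS tests run on: the target port cvl_0_0 moves into a private namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and one iptables rule admits NVMe/TCP on port 4420. Condensed from the trace (the initial address flushes and the -m comment tag on the iptables rule are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP traffic on 4420

The two pings above (0.419 ms to 10.0.0.2, 0.191 ms back to 10.0.0.1) confirm the path before nvmf_tgt is started inside the namespace with -m 0x2 --wait-for-rpc.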
00:17:48.822 [2024-11-20 16:19:19.144029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.822 [2024-11-20 16:19:19.227546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.822 [2024-11-20 16:19:19.268433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.822 [2024-11-20 16:19:19.268468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.822 [2024-11-20 16:19:19.268475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.822 [2024-11-20 16:19:19.268480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.822 [2024-11-20 16:19:19.268485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.822 [2024-11-20 16:19:19.269054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.822 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.822 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.822 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:48.822 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:49.081 true 00:17:49.081 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.081 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:49.339 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:49.339 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:49.339 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:49.598 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.598 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:49.598 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:49.598 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:49.598 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:49.856 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.856 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:50.116 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:50.116 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:50.116 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.116 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:50.375 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:50.375 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:50.375 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:50.375 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.375 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:50.633 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:50.633 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:50.633 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:50.898 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.898 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:50.898 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:50.898 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:50.898 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:50.898 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:50.898 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:50.898 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:50.898 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:50.898 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:50.898 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.dz9KKvl5lC 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.PQZ1PaNT3B 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dz9KKvl5lC 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.PQZ1PaNT3B 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:51.160 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:51.420 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.dz9KKvl5lC 00:17:51.420 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dz9KKvl5lC 00:17:51.420 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:51.678 [2024-11-20 16:19:22.808351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.679 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:51.937 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:52.196 [2024-11-20 16:19:23.173306] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:52.196 [2024-11-20 16:19:23.173543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.196 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:52.196 malloc0 00:17:52.196 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:52.454 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dz9KKvl5lC 00:17:52.713 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:52.713 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.dz9KKvl5lC 00:18:04.921 Initializing NVMe Controllers 00:18:04.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.921 Initialization complete. Launching workers. 00:18:04.921 ======================================================== 00:18:04.921 Latency(us) 00:18:04.921 Device Information : IOPS MiB/s Average min max 00:18:04.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16781.08 65.55 3813.91 848.83 5543.59 00:18:04.921 ======================================================== 00:18:04.921 Total : 16781.08 65.55 3813.91 848.83 5543.59 00:18:04.921 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dz9KKvl5lC 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dz9KKvl5lC 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1935812 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1935812 /var/tmp/bdevperf.sock 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1935812 ']' 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
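Two pieces of the TLS setup traced above are worth spelling out. First, format_interchange_psk wraps the configured key into the NVMe PSK interchange format: the 'NVMeTLSkey-1' prefix, a two-digit hash field, then base64 of the key bytes with a trailing CRC-32 appended little-endian, and a closing ':'. The sketch below is reconstructed from the trace; the inline python is an assumption about what test/nvmf/common.sh runs, not a verbatim copy:

format_interchange_psk() {
  local key=$1 digest=$2
  python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                       # the key string is used verbatim as bytes
crc = zlib.crc32(key).to_bytes(4, "little")      # CRC-32 appended, little-endian
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
' "$key" "$digest"
}
# format_interchange_psk 00112233445566778899aabbccddeeff 1 should reproduce the
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: value printed above.

Second, TLS is what distinguishes this target setup from the earlier bdevio one: the listener is added with -k (which triggers the 'TLS support is considered experimental' notice above), the key file is loaded with keyring_file_add_key key0 /tmp/tmp.dz9KKvl5lC, and the host is mapped to it via nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0. spdk_nvme_perf presents the same key with --psk-path, and the bdevperf-based run that starts next loads it on its own RPC socket and attaches with --psk key0. The second key written to /tmp/tmp.PQZ1PaNT3B is intentionally different and is reserved for the negative-path run_bdevperf check that follows.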
00:18:04.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.921 [2024-11-20 16:19:34.084274] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:18:04.921 [2024-11-20 16:19:34.084321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1935812 ] 00:18:04.921 [2024-11-20 16:19:34.157933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.921 [2024-11-20 16:19:34.198495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dz9KKvl5lC 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.921 [2024-11-20 16:19:34.634415] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.921 TLSTESTn1 00:18:04.921 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:04.921 Running I/O for 10 seconds... 
00:18:05.857 5357.00 IOPS, 20.93 MiB/s [2024-11-20T15:19:38.028Z] 5518.00 IOPS, 21.55 MiB/s [2024-11-20T15:19:38.964Z] 5476.67 IOPS, 21.39 MiB/s [2024-11-20T15:19:39.901Z] 5508.00 IOPS, 21.52 MiB/s [2024-11-20T15:19:40.839Z] 5537.80 IOPS, 21.63 MiB/s [2024-11-20T15:19:42.217Z] 5547.67 IOPS, 21.67 MiB/s [2024-11-20T15:19:43.154Z] 5549.43 IOPS, 21.68 MiB/s [2024-11-20T15:19:44.089Z] 5557.38 IOPS, 21.71 MiB/s [2024-11-20T15:19:45.026Z] 5547.33 IOPS, 21.67 MiB/s [2024-11-20T15:19:45.026Z] 5547.50 IOPS, 21.67 MiB/s 00:18:13.792 Latency(us) 00:18:13.792 [2024-11-20T15:19:45.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.792 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.792 Verification LBA range: start 0x0 length 0x2000 00:18:13.792 TLSTESTn1 : 10.02 5550.31 21.68 0.00 0.00 23026.88 6459.98 22469.49 00:18:13.792 [2024-11-20T15:19:45.026Z] =================================================================================================================== 00:18:13.792 [2024-11-20T15:19:45.026Z] Total : 5550.31 21.68 0.00 0.00 23026.88 6459.98 22469.49 00:18:13.792 { 00:18:13.792 "results": [ 00:18:13.792 { 00:18:13.792 "job": "TLSTESTn1", 00:18:13.792 "core_mask": "0x4", 00:18:13.792 "workload": "verify", 00:18:13.792 "status": "finished", 00:18:13.792 "verify_range": { 00:18:13.792 "start": 0, 00:18:13.792 "length": 8192 00:18:13.792 }, 00:18:13.792 "queue_depth": 128, 00:18:13.792 "io_size": 4096, 00:18:13.792 "runtime": 10.018006, 00:18:13.792 "iops": 5550.306118802484, 00:18:13.792 "mibps": 21.680883276572203, 00:18:13.792 "io_failed": 0, 00:18:13.792 "io_timeout": 0, 00:18:13.792 "avg_latency_us": 23026.883772869398, 00:18:13.792 "min_latency_us": 6459.977142857143, 00:18:13.792 "max_latency_us": 22469.485714285714 00:18:13.792 } 00:18:13.792 ], 00:18:13.792 "core_count": 1 00:18:13.792 } 00:18:13.792 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.792 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1935812 00:18:13.792 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1935812 ']' 00:18:13.792 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1935812 00:18:13.792 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.792 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.793 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1935812 00:18:13.793 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:13.793 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:13.793 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1935812' 00:18:13.793 killing process with pid 1935812 00:18:13.793 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1935812 00:18:13.793 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.793 00:18:13.793 Latency(us) 00:18:13.793 [2024-11-20T15:19:45.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.793 [2024-11-20T15:19:45.027Z] 
=================================================================================================================== 00:18:13.793 [2024-11-20T15:19:45.027Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.793 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1935812 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PQZ1PaNT3B 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PQZ1PaNT3B 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PQZ1PaNT3B 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PQZ1PaNT3B 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1937643 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1937643 /var/tmp/bdevperf.sock 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1937643 ']' 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
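The case starting here (target/tls.sh@147) is the first of several negative tests: bdevperf is pointed at a different PSK file (/tmp/tmp.PQZ1PaNT3B) than the one registered on the target, so the attach is expected to fail, and the NOT wrapper turns that expected failure into a pass. A sketch of the idea behind the wrapper, illustrative only and not SPDK's exact autotest_common.sh implementation:

# Illustrative sketch: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PQZ1PaNT3B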
00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.052 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.052 [2024-11-20 16:19:45.129850] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:18:14.052 [2024-11-20 16:19:45.129900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1937643 ] 00:18:14.052 [2024-11-20 16:19:45.198552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.052 [2024-11-20 16:19:45.235187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.311 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.311 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.311 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PQZ1PaNT3B 00:18:14.311 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.570 [2024-11-20 16:19:45.687213] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.570 [2024-11-20 16:19:45.697986] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:14.570 [2024-11-20 16:19:45.698632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d9170 (107): Transport endpoint is not connected 00:18:14.570 [2024-11-20 16:19:45.699625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d9170 (9): Bad file descriptor 00:18:14.570 [2024-11-20 16:19:45.700626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:14.570 [2024-11-20 16:19:45.700639] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:14.570 [2024-11-20 16:19:45.700646] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:14.570 [2024-11-20 16:19:45.700656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:14.570 request: 00:18:14.570 { 00:18:14.570 "name": "TLSTEST", 00:18:14.570 "trtype": "tcp", 00:18:14.570 "traddr": "10.0.0.2", 00:18:14.570 "adrfam": "ipv4", 00:18:14.570 "trsvcid": "4420", 00:18:14.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.570 "prchk_reftag": false, 00:18:14.570 "prchk_guard": false, 00:18:14.570 "hdgst": false, 00:18:14.570 "ddgst": false, 00:18:14.570 "psk": "key0", 00:18:14.570 "allow_unrecognized_csi": false, 00:18:14.570 "method": "bdev_nvme_attach_controller", 00:18:14.570 "req_id": 1 00:18:14.570 } 00:18:14.570 Got JSON-RPC error response 00:18:14.570 response: 00:18:14.570 { 00:18:14.570 "code": -5, 00:18:14.570 "message": "Input/output error" 00:18:14.570 } 00:18:14.570 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1937643 00:18:14.570 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1937643 ']' 00:18:14.570 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1937643 00:18:14.570 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:14.570 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.570 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1937643 00:18:14.570 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:14.570 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:14.570 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1937643' 00:18:14.571 killing process with pid 1937643 00:18:14.571 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1937643 00:18:14.571 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.571 00:18:14.571 Latency(us) 00:18:14.571 [2024-11-20T15:19:45.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.571 [2024-11-20T15:19:45.805Z] =================================================================================================================== 00:18:14.571 [2024-11-20T15:19:45.805Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.571 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1937643 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dz9KKvl5lC 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.dz9KKvl5lC 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dz9KKvl5lC 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dz9KKvl5lC 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1937793 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1937793 /var/tmp/bdevperf.sock 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1937793 ']' 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.830 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.830 [2024-11-20 16:19:45.980197] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:18:14.830 [2024-11-20 16:19:45.980266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1937793 ] 00:18:14.830 [2024-11-20 16:19:46.054607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.089 [2024-11-20 16:19:46.094915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.089 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.089 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:15.089 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dz9KKvl5lC 00:18:15.348 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:15.348 [2024-11-20 16:19:46.542121] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.348 [2024-11-20 16:19:46.550071] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:15.348 [2024-11-20 16:19:46.550094] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:15.348 [2024-11-20 16:19:46.550119] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:15.348 [2024-11-20 16:19:46.550522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c0170 (107): Transport endpoint is not connected 00:18:15.348 [2024-11-20 16:19:46.551516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c0170 (9): Bad file descriptor 00:18:15.348 [2024-11-20 16:19:46.552517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:15.348 [2024-11-20 16:19:46.552526] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:15.348 [2024-11-20 16:19:46.552532] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:15.348 [2024-11-20 16:19:46.552542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:15.348 request: 00:18:15.348 { 00:18:15.348 "name": "TLSTEST", 00:18:15.348 "trtype": "tcp", 00:18:15.348 "traddr": "10.0.0.2", 00:18:15.348 "adrfam": "ipv4", 00:18:15.348 "trsvcid": "4420", 00:18:15.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.348 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:15.348 "prchk_reftag": false, 00:18:15.348 "prchk_guard": false, 00:18:15.348 "hdgst": false, 00:18:15.348 "ddgst": false, 00:18:15.348 "psk": "key0", 00:18:15.348 "allow_unrecognized_csi": false, 00:18:15.348 "method": "bdev_nvme_attach_controller", 00:18:15.348 "req_id": 1 00:18:15.348 } 00:18:15.348 Got JSON-RPC error response 00:18:15.348 response: 00:18:15.348 { 00:18:15.348 "code": -5, 00:18:15.348 "message": "Input/output error" 00:18:15.348 } 00:18:15.348 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1937793 00:18:15.348 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1937793 ']' 00:18:15.348 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1937793 00:18:15.349 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1937793 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1937793' 00:18:15.607 killing process with pid 1937793 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1937793 00:18:15.607 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.607 00:18:15.607 Latency(us) 00:18:15.607 [2024-11-20T15:19:46.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.607 [2024-11-20T15:19:46.841Z] =================================================================================================================== 00:18:15.607 [2024-11-20T15:19:46.841Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1937793 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dz9KKvl5lC 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.dz9KKvl5lC 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dz9KKvl5lC 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dz9KKvl5lC 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1937895 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1937895 /var/tmp/bdevperf.sock 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1937895 ']' 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.607 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.607 [2024-11-20 16:19:46.832267] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:18:15.607 [2024-11-20 16:19:46.832319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1937895 ] 00:18:15.864 [2024-11-20 16:19:46.894513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.864 [2024-11-20 16:19:46.931509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.864 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.864 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:15.865 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dz9KKvl5lC 00:18:16.122 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:16.380 [2024-11-20 16:19:47.415179] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.380 [2024-11-20 16:19:47.422302] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:16.380 [2024-11-20 16:19:47.422324] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:16.380 [2024-11-20 16:19:47.422365] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:16.380 [2024-11-20 16:19:47.422632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117e170 (107): Transport endpoint is not connected 00:18:16.380 [2024-11-20 16:19:47.423625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117e170 (9): Bad file descriptor 00:18:16.381 [2024-11-20 16:19:47.424627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:16.381 [2024-11-20 16:19:47.424640] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:16.381 [2024-11-20 16:19:47.424647] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:16.381 [2024-11-20 16:19:47.424658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:16.381 request: 00:18:16.381 { 00:18:16.381 "name": "TLSTEST", 00:18:16.381 "trtype": "tcp", 00:18:16.381 "traddr": "10.0.0.2", 00:18:16.381 "adrfam": "ipv4", 00:18:16.381 "trsvcid": "4420", 00:18:16.381 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:16.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.381 "prchk_reftag": false, 00:18:16.381 "prchk_guard": false, 00:18:16.381 "hdgst": false, 00:18:16.381 "ddgst": false, 00:18:16.381 "psk": "key0", 00:18:16.381 "allow_unrecognized_csi": false, 00:18:16.381 "method": "bdev_nvme_attach_controller", 00:18:16.381 "req_id": 1 00:18:16.381 } 00:18:16.381 Got JSON-RPC error response 00:18:16.381 response: 00:18:16.381 { 00:18:16.381 "code": -5, 00:18:16.381 "message": "Input/output error" 00:18:16.381 } 00:18:16.381 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1937895 00:18:16.381 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1937895 ']' 00:18:16.381 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1937895 00:18:16.381 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:16.381 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.381 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1937895 00:18:16.381 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:16.381 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:16.381 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1937895' 00:18:16.381 killing process with pid 1937895 00:18:16.381 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1937895 00:18:16.381 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.381 00:18:16.381 Latency(us) 00:18:16.381 [2024-11-20T15:19:47.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.381 [2024-11-20T15:19:47.615Z] =================================================================================================================== 00:18:16.381 [2024-11-20T15:19:47.615Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:16.381 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1937895 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:16.640 
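The two mismatch failures above follow the same pattern: during the TLS handshake the target looks up a PSK under an identity derived from the host NQN and subsystem NQN of the connection ("NVMe0R01 <hostnqn> <subnqn>" in the target's error messages), and it only finds one if nvmf_subsystem_add_host was called with --psk for exactly that pair. Only host1/cnode1 was registered earlier, so host2 to cnode1 and host1 to cnode2 both end in "Could not find PSK for identity" on the target and an I/O error on the initiator. A hypothetical registration that would make the first of those pairs work, shown only to illustrate the per-pair mapping:

# Hypothetical: also allow host2 to reach cnode1 with the same key.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0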
16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1938129 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1938129 /var/tmp/bdevperf.sock 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1938129 ']' 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.640 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.640 [2024-11-20 16:19:47.705606] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:18:16.640 [2024-11-20 16:19:47.705659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938129 ] 00:18:16.640 [2024-11-20 16:19:47.773926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.640 [2024-11-20 16:19:47.810408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.898 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.898 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:16.898 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:16.898 [2024-11-20 16:19:48.072950] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:16.898 [2024-11-20 16:19:48.072983] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:16.898 request: 00:18:16.898 { 00:18:16.898 "name": "key0", 00:18:16.898 "path": "", 00:18:16.898 "method": "keyring_file_add_key", 00:18:16.898 "req_id": 1 00:18:16.898 } 00:18:16.898 Got JSON-RPC error response 00:18:16.898 response: 00:18:16.898 { 00:18:16.898 "code": -1, 00:18:16.898 "message": "Operation not permitted" 00:18:16.898 } 00:18:16.898 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.155 [2024-11-20 16:19:48.269546] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.155 [2024-11-20 16:19:48.269579] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:17.155 request: 00:18:17.155 { 00:18:17.155 "name": "TLSTEST", 00:18:17.155 "trtype": "tcp", 00:18:17.155 "traddr": "10.0.0.2", 00:18:17.155 "adrfam": "ipv4", 00:18:17.155 "trsvcid": "4420", 00:18:17.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.155 "prchk_reftag": false, 00:18:17.155 "prchk_guard": false, 00:18:17.155 "hdgst": false, 00:18:17.155 "ddgst": false, 00:18:17.155 "psk": "key0", 00:18:17.155 "allow_unrecognized_csi": false, 00:18:17.155 "method": "bdev_nvme_attach_controller", 00:18:17.155 "req_id": 1 00:18:17.155 } 00:18:17.155 Got JSON-RPC error response 00:18:17.155 response: 00:18:17.155 { 00:18:17.155 "code": -126, 00:18:17.155 "message": "Required key not available" 00:18:17.155 } 00:18:17.155 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1938129 00:18:17.155 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1938129 ']' 00:18:17.155 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1938129 00:18:17.155 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:17.155 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.155 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1938129 00:18:17.155 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:17.155 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:17.155 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1938129' 00:18:17.155 killing process with pid 1938129 00:18:17.155 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1938129 00:18:17.155 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.155 00:18:17.155 Latency(us) 00:18:17.155 [2024-11-20T15:19:48.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.155 [2024-11-20T15:19:48.389Z] =================================================================================================================== 00:18:17.155 [2024-11-20T15:19:48.389Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.155 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1938129 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1933379 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1933379 ']' 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1933379 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1933379 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1933379' 00:18:17.413 killing process with pid 1933379 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1933379 00:18:17.413 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1933379 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:17.671 16:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Q6jjTIe8UG 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Q6jjTIe8UG 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1938319 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1938319 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:17.671 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1938319 ']' 00:18:17.672 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.672 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.672 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.672 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.672 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.672 [2024-11-20 16:19:48.841228] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:18:17.672 [2024-11-20 16:19:48.841283] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.931 [2024-11-20 16:19:48.919888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.931 [2024-11-20 16:19:48.957455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.931 [2024-11-20 16:19:48.957490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:17.931 [2024-11-20 16:19:48.957497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.931 [2024-11-20 16:19:48.957503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.931 [2024-11-20 16:19:48.957507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.931 [2024-11-20 16:19:48.958081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.931 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.931 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:17.931 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.931 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.931 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.931 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.931 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Q6jjTIe8UG 00:18:17.931 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Q6jjTIe8UG 00:18:17.931 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:18.189 [2024-11-20 16:19:49.269694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.189 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:18.447 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:18.447 [2024-11-20 16:19:49.638651] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.447 [2024-11-20 16:19:49.638851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.447 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:18.706 malloc0 00:18:18.706 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:18.965 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Q6jjTIe8UG 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q6jjTIe8UG 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Q6jjTIe8UG 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1938630 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1938630 /var/tmp/bdevperf.sock 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1938630 ']' 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.224 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.483 [2024-11-20 16:19:50.470999] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:18:19.483 [2024-11-20 16:19:50.471049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938630 ] 00:18:19.483 [2024-11-20 16:19:50.544706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.483 [2024-11-20 16:19:50.586415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.483 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.483 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:19.483 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q6jjTIe8UG 00:18:19.742 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:20.001 [2024-11-20 16:19:51.039091] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.001 TLSTESTn1 00:18:20.001 16:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:20.001 Running I/O for 10 seconds... 00:18:22.313 5141.00 IOPS, 20.08 MiB/s [2024-11-20T15:19:54.482Z] 5345.00 IOPS, 20.88 MiB/s [2024-11-20T15:19:55.418Z] 5415.67 IOPS, 21.15 MiB/s [2024-11-20T15:19:56.354Z] 5464.75 IOPS, 21.35 MiB/s [2024-11-20T15:19:57.291Z] 5489.80 IOPS, 21.44 MiB/s [2024-11-20T15:19:58.229Z] 5507.83 IOPS, 21.51 MiB/s [2024-11-20T15:19:59.608Z] 5518.71 IOPS, 21.56 MiB/s [2024-11-20T15:20:00.546Z] 5506.50 IOPS, 21.51 MiB/s [2024-11-20T15:20:01.482Z] 5496.44 IOPS, 21.47 MiB/s [2024-11-20T15:20:01.482Z] 5499.70 IOPS, 21.48 MiB/s 00:18:30.248 Latency(us) 00:18:30.248 [2024-11-20T15:20:01.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.248 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:30.248 Verification LBA range: start 0x0 length 0x2000 00:18:30.248 TLSTESTn1 : 10.01 5505.81 21.51 0.00 0.00 23215.02 5211.67 29210.33 00:18:30.248 [2024-11-20T15:20:01.482Z] =================================================================================================================== 00:18:30.248 [2024-11-20T15:20:01.482Z] Total : 5505.81 21.51 0.00 0.00 23215.02 5211.67 29210.33 00:18:30.248 { 00:18:30.248 "results": [ 00:18:30.248 { 00:18:30.248 "job": "TLSTESTn1", 00:18:30.248 "core_mask": "0x4", 00:18:30.248 "workload": "verify", 00:18:30.248 "status": "finished", 00:18:30.248 "verify_range": { 00:18:30.248 "start": 0, 00:18:30.248 "length": 8192 00:18:30.248 }, 00:18:30.248 "queue_depth": 128, 00:18:30.248 "io_size": 4096, 00:18:30.248 "runtime": 10.011973, 00:18:30.248 "iops": 5505.807896205873, 00:18:30.248 "mibps": 21.50706209455419, 00:18:30.248 "io_failed": 0, 00:18:30.248 "io_timeout": 0, 00:18:30.248 "avg_latency_us": 23215.020810017937, 00:18:30.248 "min_latency_us": 5211.672380952381, 00:18:30.248 "max_latency_us": 29210.33142857143 00:18:30.248 } 00:18:30.248 ], 00:18:30.248 
"core_count": 1 00:18:30.248 } 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1938630 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1938630 ']' 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1938630 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1938630 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1938630' 00:18:30.248 killing process with pid 1938630 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1938630 00:18:30.248 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.248 00:18:30.248 Latency(us) 00:18:30.248 [2024-11-20T15:20:01.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.248 [2024-11-20T15:20:01.482Z] =================================================================================================================== 00:18:30.248 [2024-11-20T15:20:01.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1938630 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Q6jjTIe8UG 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q6jjTIe8UG 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q6jjTIe8UG 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q6jjTIe8UG 00:18:30.248 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Q6jjTIe8UG 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1940364 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1940364 /var/tmp/bdevperf.sock 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1940364 ']' 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.507 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.507 [2024-11-20 16:20:01.527336] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:18:30.507 [2024-11-20 16:20:01.527392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1940364 ] 00:18:30.507 [2024-11-20 16:20:01.604601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.507 [2024-11-20 16:20:01.643386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.766 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.766 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:30.766 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q6jjTIe8UG 00:18:30.766 [2024-11-20 16:20:01.906648] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Q6jjTIe8UG': 0100666 00:18:30.766 [2024-11-20 16:20:01.906677] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:30.766 request: 00:18:30.766 { 00:18:30.766 "name": "key0", 00:18:30.766 "path": "/tmp/tmp.Q6jjTIe8UG", 00:18:30.766 "method": "keyring_file_add_key", 00:18:30.766 "req_id": 1 00:18:30.766 } 00:18:30.766 Got JSON-RPC error response 00:18:30.766 response: 00:18:30.766 { 00:18:30.766 "code": -1, 00:18:30.766 "message": "Operation not permitted" 00:18:30.766 } 00:18:30.766 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:31.025 [2024-11-20 16:20:02.103238] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.025 [2024-11-20 16:20:02.103265] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:31.025 request: 00:18:31.025 { 00:18:31.025 "name": "TLSTEST", 00:18:31.025 "trtype": "tcp", 00:18:31.025 "traddr": "10.0.0.2", 00:18:31.025 "adrfam": "ipv4", 00:18:31.025 "trsvcid": "4420", 00:18:31.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.025 "prchk_reftag": false, 00:18:31.025 "prchk_guard": false, 00:18:31.025 "hdgst": false, 00:18:31.025 "ddgst": false, 00:18:31.025 "psk": "key0", 00:18:31.025 "allow_unrecognized_csi": false, 00:18:31.025 "method": "bdev_nvme_attach_controller", 00:18:31.025 "req_id": 1 00:18:31.025 } 00:18:31.025 Got JSON-RPC error response 00:18:31.025 response: 00:18:31.025 { 00:18:31.025 "code": -126, 00:18:31.025 "message": "Required key not available" 00:18:31.025 } 00:18:31.025 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1940364 00:18:31.025 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1940364 ']' 00:18:31.025 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1940364 00:18:31.025 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.025 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.025 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1940364 00:18:31.025 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:31.025 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:31.025 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1940364' 00:18:31.025 killing process with pid 1940364 00:18:31.025 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1940364 00:18:31.025 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.025 00:18:31.025 Latency(us) 00:18:31.025 [2024-11-20T15:20:02.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.025 [2024-11-20T15:20:02.259Z] =================================================================================================================== 00:18:31.025 [2024-11-20T15:20:02.259Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:31.025 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1940364 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1938319 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1938319 ']' 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1938319 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1938319 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1938319' 00:18:31.285 killing process with pid 1938319 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1938319 00:18:31.285 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1938319 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1940499 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1940499 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1940499 ']' 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.544 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.544 [2024-11-20 16:20:02.602077] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:18:31.544 [2024-11-20 16:20:02.602127] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.544 [2024-11-20 16:20:02.680772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.544 [2024-11-20 16:20:02.717312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.544 [2024-11-20 16:20:02.717352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.544 [2024-11-20 16:20:02.717360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.544 [2024-11-20 16:20:02.717365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.544 [2024-11-20 16:20:02.717371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:31.544 [2024-11-20 16:20:02.717901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Q6jjTIe8UG 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Q6jjTIe8UG 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Q6jjTIe8UG 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Q6jjTIe8UG 00:18:31.804 16:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:31.804 [2024-11-20 16:20:03.035046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.063 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:32.063 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:32.322 [2024-11-20 16:20:03.395970] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.322 [2024-11-20 16:20:03.396185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.322 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:32.580 malloc0 00:18:32.581 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:32.581 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Q6jjTIe8UG 00:18:32.838 [2024-11-20 
16:20:03.945303] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Q6jjTIe8UG': 0100666 00:18:32.838 [2024-11-20 16:20:03.945328] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:32.838 request: 00:18:32.838 { 00:18:32.838 "name": "key0", 00:18:32.838 "path": "/tmp/tmp.Q6jjTIe8UG", 00:18:32.838 "method": "keyring_file_add_key", 00:18:32.838 "req_id": 1 00:18:32.838 } 00:18:32.838 Got JSON-RPC error response 00:18:32.838 response: 00:18:32.838 { 00:18:32.838 "code": -1, 00:18:32.838 "message": "Operation not permitted" 00:18:32.838 } 00:18:32.838 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.096 [2024-11-20 16:20:04.121792] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:33.097 [2024-11-20 16:20:04.121831] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:33.097 request: 00:18:33.097 { 00:18:33.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.097 "host": "nqn.2016-06.io.spdk:host1", 00:18:33.097 "psk": "key0", 00:18:33.097 "method": "nvmf_subsystem_add_host", 00:18:33.097 "req_id": 1 00:18:33.097 } 00:18:33.097 Got JSON-RPC error response 00:18:33.097 response: 00:18:33.097 { 00:18:33.097 "code": -32603, 00:18:33.097 "message": "Internal error" 00:18:33.097 } 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1940499 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1940499 ']' 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1940499 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1940499 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1940499' 00:18:33.097 killing process with pid 1940499 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1940499 00:18:33.097 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1940499 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Q6jjTIe8UG 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:33.356 16:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1940910 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1940910 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1940910 ']' 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.356 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.356 [2024-11-20 16:20:04.414065] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:18:33.356 [2024-11-20 16:20:04.414113] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.356 [2024-11-20 16:20:04.492417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.356 [2024-11-20 16:20:04.532731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.356 [2024-11-20 16:20:04.532768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.356 [2024-11-20 16:20:04.532775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.356 [2024-11-20 16:20:04.532781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.356 [2024-11-20 16:20:04.532786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:33.356 [2024-11-20 16:20:04.533352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.615 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.615 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:33.615 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:33.615 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:33.615 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.615 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.615 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Q6jjTIe8UG 00:18:33.615 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Q6jjTIe8UG 00:18:33.615 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:33.615 [2024-11-20 16:20:04.825931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.876 16:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:33.876 16:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:34.193 [2024-11-20 16:20:05.206921] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.193 [2024-11-20 16:20:05.207112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.193 16:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:34.517 malloc0 00:18:34.517 16:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:34.517 16:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Q6jjTIe8UG 00:18:34.775 16:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:35.035 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1941238 00:18:35.035 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.035 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.035 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1941238 /var/tmp/bdevperf.sock 00:18:35.035 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1941238 ']' 00:18:35.035 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.035 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.035 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.035 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.035 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.035 [2024-11-20 16:20:06.082867] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:18:35.035 [2024-11-20 16:20:06.082920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1941238 ] 00:18:35.035 [2024-11-20 16:20:06.158960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.035 [2024-11-20 16:20:06.198734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.294 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.294 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:35.294 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q6jjTIe8UG 00:18:35.294 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:35.553 [2024-11-20 16:20:06.659026] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.553 TLSTESTn1 00:18:35.553 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:35.812 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:35.812 "subsystems": [ 00:18:35.812 { 00:18:35.812 "subsystem": "keyring", 00:18:35.812 "config": [ 00:18:35.812 { 00:18:35.812 "method": "keyring_file_add_key", 00:18:35.812 "params": { 00:18:35.812 "name": "key0", 00:18:35.812 "path": "/tmp/tmp.Q6jjTIe8UG" 00:18:35.812 } 00:18:35.812 } 00:18:35.812 ] 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "subsystem": "iobuf", 00:18:35.812 "config": [ 00:18:35.812 { 00:18:35.812 "method": "iobuf_set_options", 00:18:35.812 "params": { 00:18:35.812 "small_pool_count": 8192, 00:18:35.812 "large_pool_count": 1024, 00:18:35.812 "small_bufsize": 8192, 00:18:35.812 "large_bufsize": 135168, 00:18:35.812 "enable_numa": false 00:18:35.812 } 00:18:35.812 } 00:18:35.812 ] 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "subsystem": "sock", 00:18:35.812 "config": [ 00:18:35.812 { 00:18:35.812 "method": "sock_set_default_impl", 00:18:35.812 "params": { 00:18:35.812 "impl_name": "posix" 
00:18:35.812 } 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "method": "sock_impl_set_options", 00:18:35.812 "params": { 00:18:35.812 "impl_name": "ssl", 00:18:35.812 "recv_buf_size": 4096, 00:18:35.812 "send_buf_size": 4096, 00:18:35.812 "enable_recv_pipe": true, 00:18:35.812 "enable_quickack": false, 00:18:35.812 "enable_placement_id": 0, 00:18:35.812 "enable_zerocopy_send_server": true, 00:18:35.812 "enable_zerocopy_send_client": false, 00:18:35.812 "zerocopy_threshold": 0, 00:18:35.812 "tls_version": 0, 00:18:35.812 "enable_ktls": false 00:18:35.812 } 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "method": "sock_impl_set_options", 00:18:35.812 "params": { 00:18:35.812 "impl_name": "posix", 00:18:35.812 "recv_buf_size": 2097152, 00:18:35.812 "send_buf_size": 2097152, 00:18:35.812 "enable_recv_pipe": true, 00:18:35.812 "enable_quickack": false, 00:18:35.812 "enable_placement_id": 0, 00:18:35.812 "enable_zerocopy_send_server": true, 00:18:35.812 "enable_zerocopy_send_client": false, 00:18:35.812 "zerocopy_threshold": 0, 00:18:35.812 "tls_version": 0, 00:18:35.812 "enable_ktls": false 00:18:35.812 } 00:18:35.812 } 00:18:35.812 ] 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "subsystem": "vmd", 00:18:35.812 "config": [] 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "subsystem": "accel", 00:18:35.812 "config": [ 00:18:35.812 { 00:18:35.812 "method": "accel_set_options", 00:18:35.812 "params": { 00:18:35.812 "small_cache_size": 128, 00:18:35.812 "large_cache_size": 16, 00:18:35.812 "task_count": 2048, 00:18:35.812 "sequence_count": 2048, 00:18:35.812 "buf_count": 2048 00:18:35.812 } 00:18:35.812 } 00:18:35.812 ] 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "subsystem": "bdev", 00:18:35.812 "config": [ 00:18:35.812 { 00:18:35.812 "method": "bdev_set_options", 00:18:35.812 "params": { 00:18:35.812 "bdev_io_pool_size": 65535, 00:18:35.812 "bdev_io_cache_size": 256, 00:18:35.812 "bdev_auto_examine": true, 00:18:35.812 "iobuf_small_cache_size": 128, 00:18:35.812 "iobuf_large_cache_size": 16 00:18:35.812 } 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "method": "bdev_raid_set_options", 00:18:35.812 "params": { 00:18:35.812 "process_window_size_kb": 1024, 00:18:35.812 "process_max_bandwidth_mb_sec": 0 00:18:35.812 } 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "method": "bdev_iscsi_set_options", 00:18:35.812 "params": { 00:18:35.812 "timeout_sec": 30 00:18:35.812 } 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "method": "bdev_nvme_set_options", 00:18:35.812 "params": { 00:18:35.812 "action_on_timeout": "none", 00:18:35.812 "timeout_us": 0, 00:18:35.812 "timeout_admin_us": 0, 00:18:35.812 "keep_alive_timeout_ms": 10000, 00:18:35.812 "arbitration_burst": 0, 00:18:35.812 "low_priority_weight": 0, 00:18:35.812 "medium_priority_weight": 0, 00:18:35.812 "high_priority_weight": 0, 00:18:35.812 "nvme_adminq_poll_period_us": 10000, 00:18:35.812 "nvme_ioq_poll_period_us": 0, 00:18:35.812 "io_queue_requests": 0, 00:18:35.812 "delay_cmd_submit": true, 00:18:35.812 "transport_retry_count": 4, 00:18:35.812 "bdev_retry_count": 3, 00:18:35.812 "transport_ack_timeout": 0, 00:18:35.812 "ctrlr_loss_timeout_sec": 0, 00:18:35.812 "reconnect_delay_sec": 0, 00:18:35.812 "fast_io_fail_timeout_sec": 0, 00:18:35.812 "disable_auto_failback": false, 00:18:35.812 "generate_uuids": false, 00:18:35.812 "transport_tos": 0, 00:18:35.812 "nvme_error_stat": false, 00:18:35.812 "rdma_srq_size": 0, 00:18:35.812 "io_path_stat": false, 00:18:35.812 "allow_accel_sequence": false, 00:18:35.812 "rdma_max_cq_size": 0, 00:18:35.812 
"rdma_cm_event_timeout_ms": 0, 00:18:35.812 "dhchap_digests": [ 00:18:35.812 "sha256", 00:18:35.812 "sha384", 00:18:35.812 "sha512" 00:18:35.812 ], 00:18:35.812 "dhchap_dhgroups": [ 00:18:35.812 "null", 00:18:35.812 "ffdhe2048", 00:18:35.812 "ffdhe3072", 00:18:35.812 "ffdhe4096", 00:18:35.812 "ffdhe6144", 00:18:35.812 "ffdhe8192" 00:18:35.812 ] 00:18:35.812 } 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "method": "bdev_nvme_set_hotplug", 00:18:35.812 "params": { 00:18:35.812 "period_us": 100000, 00:18:35.812 "enable": false 00:18:35.812 } 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "method": "bdev_malloc_create", 00:18:35.812 "params": { 00:18:35.812 "name": "malloc0", 00:18:35.812 "num_blocks": 8192, 00:18:35.812 "block_size": 4096, 00:18:35.812 "physical_block_size": 4096, 00:18:35.812 "uuid": "76311913-da4e-4a32-a180-773865bc3c3a", 00:18:35.812 "optimal_io_boundary": 0, 00:18:35.812 "md_size": 0, 00:18:35.812 "dif_type": 0, 00:18:35.812 "dif_is_head_of_md": false, 00:18:35.812 "dif_pi_format": 0 00:18:35.812 } 00:18:35.812 }, 00:18:35.812 { 00:18:35.812 "method": "bdev_wait_for_examine" 00:18:35.812 } 00:18:35.812 ] 00:18:35.812 }, 00:18:35.812 { 00:18:35.813 "subsystem": "nbd", 00:18:35.813 "config": [] 00:18:35.813 }, 00:18:35.813 { 00:18:35.813 "subsystem": "scheduler", 00:18:35.813 "config": [ 00:18:35.813 { 00:18:35.813 "method": "framework_set_scheduler", 00:18:35.813 "params": { 00:18:35.813 "name": "static" 00:18:35.813 } 00:18:35.813 } 00:18:35.813 ] 00:18:35.813 }, 00:18:35.813 { 00:18:35.813 "subsystem": "nvmf", 00:18:35.813 "config": [ 00:18:35.813 { 00:18:35.813 "method": "nvmf_set_config", 00:18:35.813 "params": { 00:18:35.813 "discovery_filter": "match_any", 00:18:35.813 "admin_cmd_passthru": { 00:18:35.813 "identify_ctrlr": false 00:18:35.813 }, 00:18:35.813 "dhchap_digests": [ 00:18:35.813 "sha256", 00:18:35.813 "sha384", 00:18:35.813 "sha512" 00:18:35.813 ], 00:18:35.813 "dhchap_dhgroups": [ 00:18:35.813 "null", 00:18:35.813 "ffdhe2048", 00:18:35.813 "ffdhe3072", 00:18:35.813 "ffdhe4096", 00:18:35.813 "ffdhe6144", 00:18:35.813 "ffdhe8192" 00:18:35.813 ] 00:18:35.813 } 00:18:35.813 }, 00:18:35.813 { 00:18:35.813 "method": "nvmf_set_max_subsystems", 00:18:35.813 "params": { 00:18:35.813 "max_subsystems": 1024 00:18:35.813 } 00:18:35.813 }, 00:18:35.813 { 00:18:35.813 "method": "nvmf_set_crdt", 00:18:35.813 "params": { 00:18:35.813 "crdt1": 0, 00:18:35.813 "crdt2": 0, 00:18:35.813 "crdt3": 0 00:18:35.813 } 00:18:35.813 }, 00:18:35.813 { 00:18:35.813 "method": "nvmf_create_transport", 00:18:35.813 "params": { 00:18:35.813 "trtype": "TCP", 00:18:35.813 "max_queue_depth": 128, 00:18:35.813 "max_io_qpairs_per_ctrlr": 127, 00:18:35.813 "in_capsule_data_size": 4096, 00:18:35.813 "max_io_size": 131072, 00:18:35.813 "io_unit_size": 131072, 00:18:35.813 "max_aq_depth": 128, 00:18:35.813 "num_shared_buffers": 511, 00:18:35.813 "buf_cache_size": 4294967295, 00:18:35.813 "dif_insert_or_strip": false, 00:18:35.813 "zcopy": false, 00:18:35.813 "c2h_success": false, 00:18:35.813 "sock_priority": 0, 00:18:35.813 "abort_timeout_sec": 1, 00:18:35.813 "ack_timeout": 0, 00:18:35.813 "data_wr_pool_size": 0 00:18:35.813 } 00:18:35.813 }, 00:18:35.813 { 00:18:35.813 "method": "nvmf_create_subsystem", 00:18:35.813 "params": { 00:18:35.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.813 "allow_any_host": false, 00:18:35.813 "serial_number": "SPDK00000000000001", 00:18:35.813 "model_number": "SPDK bdev Controller", 00:18:35.813 "max_namespaces": 10, 00:18:35.813 "min_cntlid": 1, 00:18:35.813 
"max_cntlid": 65519, 00:18:35.813 "ana_reporting": false 00:18:35.813 } 00:18:35.813 }, 00:18:35.813 { 00:18:35.813 "method": "nvmf_subsystem_add_host", 00:18:35.813 "params": { 00:18:35.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.813 "host": "nqn.2016-06.io.spdk:host1", 00:18:35.813 "psk": "key0" 00:18:35.813 } 00:18:35.813 }, 00:18:35.813 { 00:18:35.813 "method": "nvmf_subsystem_add_ns", 00:18:35.813 "params": { 00:18:35.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.813 "namespace": { 00:18:35.813 "nsid": 1, 00:18:35.813 "bdev_name": "malloc0", 00:18:35.813 "nguid": "76311913DA4E4A32A180773865BC3C3A", 00:18:35.813 "uuid": "76311913-da4e-4a32-a180-773865bc3c3a", 00:18:35.813 "no_auto_visible": false 00:18:35.813 } 00:18:35.813 } 00:18:35.813 }, 00:18:35.813 { 00:18:35.813 "method": "nvmf_subsystem_add_listener", 00:18:35.813 "params": { 00:18:35.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.813 "listen_address": { 00:18:35.813 "trtype": "TCP", 00:18:35.813 "adrfam": "IPv4", 00:18:35.813 "traddr": "10.0.0.2", 00:18:35.813 "trsvcid": "4420" 00:18:35.813 }, 00:18:35.813 "secure_channel": true 00:18:35.813 } 00:18:35.813 } 00:18:35.813 ] 00:18:35.813 } 00:18:35.813 ] 00:18:35.813 }' 00:18:35.813 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:36.073 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:36.073 "subsystems": [ 00:18:36.073 { 00:18:36.073 "subsystem": "keyring", 00:18:36.073 "config": [ 00:18:36.073 { 00:18:36.073 "method": "keyring_file_add_key", 00:18:36.073 "params": { 00:18:36.073 "name": "key0", 00:18:36.073 "path": "/tmp/tmp.Q6jjTIe8UG" 00:18:36.073 } 00:18:36.073 } 00:18:36.073 ] 00:18:36.073 }, 00:18:36.073 { 00:18:36.073 "subsystem": "iobuf", 00:18:36.073 "config": [ 00:18:36.073 { 00:18:36.073 "method": "iobuf_set_options", 00:18:36.073 "params": { 00:18:36.073 "small_pool_count": 8192, 00:18:36.073 "large_pool_count": 1024, 00:18:36.073 "small_bufsize": 8192, 00:18:36.073 "large_bufsize": 135168, 00:18:36.073 "enable_numa": false 00:18:36.073 } 00:18:36.073 } 00:18:36.073 ] 00:18:36.073 }, 00:18:36.073 { 00:18:36.073 "subsystem": "sock", 00:18:36.073 "config": [ 00:18:36.073 { 00:18:36.073 "method": "sock_set_default_impl", 00:18:36.073 "params": { 00:18:36.073 "impl_name": "posix" 00:18:36.073 } 00:18:36.073 }, 00:18:36.073 { 00:18:36.073 "method": "sock_impl_set_options", 00:18:36.073 "params": { 00:18:36.073 "impl_name": "ssl", 00:18:36.073 "recv_buf_size": 4096, 00:18:36.073 "send_buf_size": 4096, 00:18:36.073 "enable_recv_pipe": true, 00:18:36.073 "enable_quickack": false, 00:18:36.073 "enable_placement_id": 0, 00:18:36.073 "enable_zerocopy_send_server": true, 00:18:36.073 "enable_zerocopy_send_client": false, 00:18:36.073 "zerocopy_threshold": 0, 00:18:36.073 "tls_version": 0, 00:18:36.073 "enable_ktls": false 00:18:36.073 } 00:18:36.073 }, 00:18:36.073 { 00:18:36.073 "method": "sock_impl_set_options", 00:18:36.073 "params": { 00:18:36.073 "impl_name": "posix", 00:18:36.073 "recv_buf_size": 2097152, 00:18:36.073 "send_buf_size": 2097152, 00:18:36.073 "enable_recv_pipe": true, 00:18:36.073 "enable_quickack": false, 00:18:36.073 "enable_placement_id": 0, 00:18:36.073 "enable_zerocopy_send_server": true, 00:18:36.073 "enable_zerocopy_send_client": false, 00:18:36.073 "zerocopy_threshold": 0, 00:18:36.073 "tls_version": 0, 00:18:36.073 "enable_ktls": false 00:18:36.073 } 00:18:36.073 
} 00:18:36.073 ] 00:18:36.073 }, 00:18:36.073 { 00:18:36.073 "subsystem": "vmd", 00:18:36.073 "config": [] 00:18:36.073 }, 00:18:36.073 { 00:18:36.073 "subsystem": "accel", 00:18:36.073 "config": [ 00:18:36.073 { 00:18:36.073 "method": "accel_set_options", 00:18:36.073 "params": { 00:18:36.073 "small_cache_size": 128, 00:18:36.073 "large_cache_size": 16, 00:18:36.073 "task_count": 2048, 00:18:36.073 "sequence_count": 2048, 00:18:36.073 "buf_count": 2048 00:18:36.073 } 00:18:36.073 } 00:18:36.073 ] 00:18:36.073 }, 00:18:36.073 { 00:18:36.073 "subsystem": "bdev", 00:18:36.073 "config": [ 00:18:36.073 { 00:18:36.073 "method": "bdev_set_options", 00:18:36.073 "params": { 00:18:36.073 "bdev_io_pool_size": 65535, 00:18:36.073 "bdev_io_cache_size": 256, 00:18:36.073 "bdev_auto_examine": true, 00:18:36.073 "iobuf_small_cache_size": 128, 00:18:36.073 "iobuf_large_cache_size": 16 00:18:36.073 } 00:18:36.073 }, 00:18:36.073 { 00:18:36.073 "method": "bdev_raid_set_options", 00:18:36.073 "params": { 00:18:36.073 "process_window_size_kb": 1024, 00:18:36.073 "process_max_bandwidth_mb_sec": 0 00:18:36.073 } 00:18:36.073 }, 00:18:36.073 { 00:18:36.073 "method": "bdev_iscsi_set_options", 00:18:36.073 "params": { 00:18:36.073 "timeout_sec": 30 00:18:36.073 } 00:18:36.073 }, 00:18:36.073 { 00:18:36.073 "method": "bdev_nvme_set_options", 00:18:36.073 "params": { 00:18:36.073 "action_on_timeout": "none", 00:18:36.073 "timeout_us": 0, 00:18:36.073 "timeout_admin_us": 0, 00:18:36.073 "keep_alive_timeout_ms": 10000, 00:18:36.073 "arbitration_burst": 0, 00:18:36.073 "low_priority_weight": 0, 00:18:36.073 "medium_priority_weight": 0, 00:18:36.073 "high_priority_weight": 0, 00:18:36.073 "nvme_adminq_poll_period_us": 10000, 00:18:36.073 "nvme_ioq_poll_period_us": 0, 00:18:36.073 "io_queue_requests": 512, 00:18:36.073 "delay_cmd_submit": true, 00:18:36.073 "transport_retry_count": 4, 00:18:36.073 "bdev_retry_count": 3, 00:18:36.073 "transport_ack_timeout": 0, 00:18:36.073 "ctrlr_loss_timeout_sec": 0, 00:18:36.073 "reconnect_delay_sec": 0, 00:18:36.073 "fast_io_fail_timeout_sec": 0, 00:18:36.073 "disable_auto_failback": false, 00:18:36.073 "generate_uuids": false, 00:18:36.073 "transport_tos": 0, 00:18:36.073 "nvme_error_stat": false, 00:18:36.073 "rdma_srq_size": 0, 00:18:36.073 "io_path_stat": false, 00:18:36.073 "allow_accel_sequence": false, 00:18:36.074 "rdma_max_cq_size": 0, 00:18:36.074 "rdma_cm_event_timeout_ms": 0, 00:18:36.074 "dhchap_digests": [ 00:18:36.074 "sha256", 00:18:36.074 "sha384", 00:18:36.074 "sha512" 00:18:36.074 ], 00:18:36.074 "dhchap_dhgroups": [ 00:18:36.074 "null", 00:18:36.074 "ffdhe2048", 00:18:36.074 "ffdhe3072", 00:18:36.074 "ffdhe4096", 00:18:36.074 "ffdhe6144", 00:18:36.074 "ffdhe8192" 00:18:36.074 ] 00:18:36.074 } 00:18:36.074 }, 00:18:36.074 { 00:18:36.074 "method": "bdev_nvme_attach_controller", 00:18:36.074 "params": { 00:18:36.074 "name": "TLSTEST", 00:18:36.074 "trtype": "TCP", 00:18:36.074 "adrfam": "IPv4", 00:18:36.074 "traddr": "10.0.0.2", 00:18:36.074 "trsvcid": "4420", 00:18:36.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.074 "prchk_reftag": false, 00:18:36.074 "prchk_guard": false, 00:18:36.074 "ctrlr_loss_timeout_sec": 0, 00:18:36.074 "reconnect_delay_sec": 0, 00:18:36.074 "fast_io_fail_timeout_sec": 0, 00:18:36.074 "psk": "key0", 00:18:36.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.074 "hdgst": false, 00:18:36.074 "ddgst": false, 00:18:36.074 "multipath": "multipath" 00:18:36.074 } 00:18:36.074 }, 00:18:36.074 { 00:18:36.074 "method": 
"bdev_nvme_set_hotplug", 00:18:36.074 "params": { 00:18:36.074 "period_us": 100000, 00:18:36.074 "enable": false 00:18:36.074 } 00:18:36.074 }, 00:18:36.074 { 00:18:36.074 "method": "bdev_wait_for_examine" 00:18:36.074 } 00:18:36.074 ] 00:18:36.074 }, 00:18:36.074 { 00:18:36.074 "subsystem": "nbd", 00:18:36.074 "config": [] 00:18:36.074 } 00:18:36.074 ] 00:18:36.074 }' 00:18:36.074 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1941238 00:18:36.074 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1941238 ']' 00:18:36.074 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1941238 00:18:36.074 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.074 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.074 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1941238 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1941238' 00:18:36.332 killing process with pid 1941238 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1941238 00:18:36.332 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.332 00:18:36.332 Latency(us) 00:18:36.332 [2024-11-20T15:20:07.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.332 [2024-11-20T15:20:07.566Z] =================================================================================================================== 00:18:36.332 [2024-11-20T15:20:07.566Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1941238 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1940910 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1940910 ']' 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1940910 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1940910 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1940910' 00:18:36.332 killing process with pid 1940910 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1940910 00:18:36.332 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1940910 00:18:36.591 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:36.591 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.591 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.591 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:36.591 "subsystems": [ 00:18:36.591 { 00:18:36.591 "subsystem": "keyring", 00:18:36.591 "config": [ 00:18:36.591 { 00:18:36.591 "method": "keyring_file_add_key", 00:18:36.591 "params": { 00:18:36.591 "name": "key0", 00:18:36.591 "path": "/tmp/tmp.Q6jjTIe8UG" 00:18:36.591 } 00:18:36.591 } 00:18:36.591 ] 00:18:36.591 }, 00:18:36.591 { 00:18:36.591 "subsystem": "iobuf", 00:18:36.591 "config": [ 00:18:36.591 { 00:18:36.591 "method": "iobuf_set_options", 00:18:36.591 "params": { 00:18:36.591 "small_pool_count": 8192, 00:18:36.591 "large_pool_count": 1024, 00:18:36.591 "small_bufsize": 8192, 00:18:36.591 "large_bufsize": 135168, 00:18:36.591 "enable_numa": false 00:18:36.591 } 00:18:36.591 } 00:18:36.591 ] 00:18:36.591 }, 00:18:36.591 { 00:18:36.591 "subsystem": "sock", 00:18:36.592 "config": [ 00:18:36.592 { 00:18:36.592 "method": "sock_set_default_impl", 00:18:36.592 "params": { 00:18:36.592 "impl_name": "posix" 00:18:36.592 } 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "method": "sock_impl_set_options", 00:18:36.592 "params": { 00:18:36.592 "impl_name": "ssl", 00:18:36.592 "recv_buf_size": 4096, 00:18:36.592 "send_buf_size": 4096, 00:18:36.592 "enable_recv_pipe": true, 00:18:36.592 "enable_quickack": false, 00:18:36.592 "enable_placement_id": 0, 00:18:36.592 "enable_zerocopy_send_server": true, 00:18:36.592 "enable_zerocopy_send_client": false, 00:18:36.592 "zerocopy_threshold": 0, 00:18:36.592 "tls_version": 0, 00:18:36.592 "enable_ktls": false 00:18:36.592 } 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "method": "sock_impl_set_options", 00:18:36.592 "params": { 00:18:36.592 "impl_name": "posix", 00:18:36.592 "recv_buf_size": 2097152, 00:18:36.592 "send_buf_size": 2097152, 00:18:36.592 "enable_recv_pipe": true, 00:18:36.592 "enable_quickack": false, 00:18:36.592 "enable_placement_id": 0, 00:18:36.592 "enable_zerocopy_send_server": true, 00:18:36.592 "enable_zerocopy_send_client": false, 00:18:36.592 "zerocopy_threshold": 0, 00:18:36.592 "tls_version": 0, 00:18:36.592 "enable_ktls": false 00:18:36.592 } 00:18:36.592 } 00:18:36.592 ] 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "subsystem": "vmd", 00:18:36.592 "config": [] 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "subsystem": "accel", 00:18:36.592 "config": [ 00:18:36.592 { 00:18:36.592 "method": "accel_set_options", 00:18:36.592 "params": { 00:18:36.592 "small_cache_size": 128, 00:18:36.592 "large_cache_size": 16, 00:18:36.592 "task_count": 2048, 00:18:36.592 "sequence_count": 2048, 00:18:36.592 "buf_count": 2048 00:18:36.592 } 00:18:36.592 } 00:18:36.592 ] 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "subsystem": "bdev", 00:18:36.592 "config": [ 00:18:36.592 { 00:18:36.592 "method": "bdev_set_options", 00:18:36.592 "params": { 00:18:36.592 "bdev_io_pool_size": 65535, 00:18:36.592 "bdev_io_cache_size": 256, 00:18:36.592 "bdev_auto_examine": true, 00:18:36.592 "iobuf_small_cache_size": 128, 00:18:36.592 "iobuf_large_cache_size": 16 00:18:36.592 } 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "method": "bdev_raid_set_options", 00:18:36.592 "params": { 00:18:36.592 "process_window_size_kb": 1024, 00:18:36.592 "process_max_bandwidth_mb_sec": 0 00:18:36.592 } 00:18:36.592 }, 
00:18:36.592 { 00:18:36.592 "method": "bdev_iscsi_set_options", 00:18:36.592 "params": { 00:18:36.592 "timeout_sec": 30 00:18:36.592 } 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "method": "bdev_nvme_set_options", 00:18:36.592 "params": { 00:18:36.592 "action_on_timeout": "none", 00:18:36.592 "timeout_us": 0, 00:18:36.592 "timeout_admin_us": 0, 00:18:36.592 "keep_alive_timeout_ms": 10000, 00:18:36.592 "arbitration_burst": 0, 00:18:36.592 "low_priority_weight": 0, 00:18:36.592 "medium_priority_weight": 0, 00:18:36.592 "high_priority_weight": 0, 00:18:36.592 "nvme_adminq_poll_period_us": 10000, 00:18:36.592 "nvme_ioq_poll_period_us": 0, 00:18:36.592 "io_queue_requests": 0, 00:18:36.592 "delay_cmd_submit": true, 00:18:36.592 "transport_retry_count": 4, 00:18:36.592 "bdev_retry_count": 3, 00:18:36.592 "transport_ack_timeout": 0, 00:18:36.592 "ctrlr_loss_timeout_sec": 0, 00:18:36.592 "reconnect_delay_sec": 0, 00:18:36.592 "fast_io_fail_timeout_sec": 0, 00:18:36.592 "disable_auto_failback": false, 00:18:36.592 "generate_uuids": false, 00:18:36.592 "transport_tos": 0, 00:18:36.592 "nvme_error_stat": false, 00:18:36.592 "rdma_srq_size": 0, 00:18:36.592 "io_path_stat": false, 00:18:36.592 "allow_accel_sequence": false, 00:18:36.592 "rdma_max_cq_size": 0, 00:18:36.592 "rdma_cm_event_timeout_ms": 0, 00:18:36.592 "dhchap_digests": [ 00:18:36.592 "sha256", 00:18:36.592 "sha384", 00:18:36.592 "sha512" 00:18:36.592 ], 00:18:36.592 "dhchap_dhgroups": [ 00:18:36.592 "null", 00:18:36.592 "ffdhe2048", 00:18:36.592 "ffdhe3072", 00:18:36.592 "ffdhe4096", 00:18:36.592 "ffdhe6144", 00:18:36.592 "ffdhe8192" 00:18:36.592 ] 00:18:36.592 } 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "method": "bdev_nvme_set_hotplug", 00:18:36.592 "params": { 00:18:36.592 "period_us": 100000, 00:18:36.592 "enable": false 00:18:36.592 } 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "method": "bdev_malloc_create", 00:18:36.592 "params": { 00:18:36.592 "name": "malloc0", 00:18:36.592 "num_blocks": 8192, 00:18:36.592 "block_size": 4096, 00:18:36.592 "physical_block_size": 4096, 00:18:36.592 "uuid": "76311913-da4e-4a32-a180-773865bc3c3a", 00:18:36.592 "optimal_io_boundary": 0, 00:18:36.592 "md_size": 0, 00:18:36.592 "dif_type": 0, 00:18:36.592 "dif_is_head_of_md": false, 00:18:36.592 "dif_pi_format": 0 00:18:36.592 } 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "method": "bdev_wait_for_examine" 00:18:36.592 } 00:18:36.592 ] 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "subsystem": "nbd", 00:18:36.592 "config": [] 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "subsystem": "scheduler", 00:18:36.592 "config": [ 00:18:36.592 { 00:18:36.592 "method": "framework_set_scheduler", 00:18:36.592 "params": { 00:18:36.592 "name": "static" 00:18:36.592 } 00:18:36.592 } 00:18:36.592 ] 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "subsystem": "nvmf", 00:18:36.592 "config": [ 00:18:36.592 { 00:18:36.592 "method": "nvmf_set_config", 00:18:36.592 "params": { 00:18:36.592 "discovery_filter": "match_any", 00:18:36.592 "admin_cmd_passthru": { 00:18:36.592 "identify_ctrlr": false 00:18:36.592 }, 00:18:36.592 "dhchap_digests": [ 00:18:36.592 "sha256", 00:18:36.592 "sha384", 00:18:36.592 "sha512" 00:18:36.592 ], 00:18:36.592 "dhchap_dhgroups": [ 00:18:36.592 "null", 00:18:36.592 "ffdhe2048", 00:18:36.592 "ffdhe3072", 00:18:36.592 "ffdhe4096", 00:18:36.592 "ffdhe6144", 00:18:36.592 "ffdhe8192" 00:18:36.592 ] 00:18:36.592 } 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "method": "nvmf_set_max_subsystems", 00:18:36.592 "params": { 00:18:36.592 "max_subsystems": 1024 
00:18:36.592 } 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "method": "nvmf_set_crdt", 00:18:36.592 "params": { 00:18:36.592 "crdt1": 0, 00:18:36.592 "crdt2": 0, 00:18:36.592 "crdt3": 0 00:18:36.592 } 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "method": "nvmf_create_transport", 00:18:36.592 "params": { 00:18:36.592 "trtype": "TCP", 00:18:36.592 "max_queue_depth": 128, 00:18:36.592 "max_io_qpairs_per_ctrlr": 127, 00:18:36.592 "in_capsule_data_size": 4096, 00:18:36.592 "max_io_size": 131072, 00:18:36.592 "io_unit_size": 131072, 00:18:36.592 "max_aq_depth": 128, 00:18:36.592 "num_shared_buffers": 511, 00:18:36.592 "buf_cache_size": 4294967295, 00:18:36.592 "dif_insert_or_strip": false, 00:18:36.592 "zcopy": false, 00:18:36.592 "c2h_success": false, 00:18:36.592 "sock_priority": 0, 00:18:36.592 "abort_timeout_sec": 1, 00:18:36.592 "ack_timeout": 0, 00:18:36.592 "data_wr_pool_size": 0 00:18:36.592 } 00:18:36.592 }, 00:18:36.592 { 00:18:36.592 "method": "nvmf_create_subsystem", 00:18:36.592 "params": { 00:18:36.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.592 "allow_any_host": false, 00:18:36.592 "serial_number": "SPDK00000000000001", 00:18:36.592 "model_number": "SPDK bdev Controller", 00:18:36.593 "max_namespaces": 10, 00:18:36.593 "min_cntlid": 1, 00:18:36.593 "max_cntlid": 65519, 00:18:36.593 "ana_reporting": false 00:18:36.593 } 00:18:36.593 }, 00:18:36.593 { 00:18:36.593 "method": "nvmf_subsystem_add_host", 00:18:36.593 "params": { 00:18:36.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.593 "host": "nqn.2016-06.io.spdk:host1", 00:18:36.593 "psk": "key0" 00:18:36.593 } 00:18:36.593 }, 00:18:36.593 { 00:18:36.593 "method": "nvmf_subsystem_add_ns", 00:18:36.593 "params": { 00:18:36.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.593 "namespace": { 00:18:36.593 "nsid": 1, 00:18:36.593 "bdev_name": "malloc0", 00:18:36.593 "nguid": "76311913DA4E4A32A180773865BC3C3A", 00:18:36.593 "uuid": "76311913-da4e-4a32-a180-773865bc3c3a", 00:18:36.593 "no_auto_visible": false 00:18:36.593 } 00:18:36.593 } 00:18:36.593 }, 00:18:36.593 { 00:18:36.593 "method": "nvmf_subsystem_add_listener", 00:18:36.593 "params": { 00:18:36.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.593 "listen_address": { 00:18:36.593 "trtype": "TCP", 00:18:36.593 "adrfam": "IPv4", 00:18:36.593 "traddr": "10.0.0.2", 00:18:36.593 "trsvcid": "4420" 00:18:36.593 }, 00:18:36.593 "secure_channel": true 00:18:36.593 } 00:18:36.593 } 00:18:36.593 ] 00:18:36.593 } 00:18:36.593 ] 00:18:36.593 }' 00:18:36.593 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.593 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1941491 00:18:36.593 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:36.593 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1941491 00:18:36.593 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1941491 ']' 00:18:36.593 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.593 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.593 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:36.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.593 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.593 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.593 [2024-11-20 16:20:07.764003] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:18:36.593 [2024-11-20 16:20:07.764047] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.852 [2024-11-20 16:20:07.839635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.852 [2024-11-20 16:20:07.879669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.852 [2024-11-20 16:20:07.879705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.852 [2024-11-20 16:20:07.879711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.852 [2024-11-20 16:20:07.879717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.852 [2024-11-20 16:20:07.879722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.852 [2024-11-20 16:20:07.880311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.111 [2024-11-20 16:20:08.093370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.111 [2024-11-20 16:20:08.125400] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.111 [2024-11-20 16:20:08.125589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.369 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.369 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.369 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.369 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:37.369 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.629 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.629 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1941674 00:18:37.629 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1941674 /var/tmp/bdevperf.sock 00:18:37.629 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1941674 ']' 00:18:37.629 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.629 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:37.629 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.629 16:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.629 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:37.629 "subsystems": [ 00:18:37.629 { 00:18:37.629 "subsystem": "keyring", 00:18:37.629 "config": [ 00:18:37.629 { 00:18:37.629 "method": "keyring_file_add_key", 00:18:37.629 "params": { 00:18:37.629 "name": "key0", 00:18:37.629 "path": "/tmp/tmp.Q6jjTIe8UG" 00:18:37.629 } 00:18:37.629 } 00:18:37.629 ] 00:18:37.629 }, 00:18:37.629 { 00:18:37.629 "subsystem": "iobuf", 00:18:37.629 "config": [ 00:18:37.629 { 00:18:37.629 "method": "iobuf_set_options", 00:18:37.629 "params": { 00:18:37.629 "small_pool_count": 8192, 00:18:37.629 "large_pool_count": 1024, 00:18:37.629 "small_bufsize": 8192, 00:18:37.629 "large_bufsize": 135168, 00:18:37.629 "enable_numa": false 00:18:37.629 } 00:18:37.629 } 00:18:37.629 ] 00:18:37.629 }, 00:18:37.629 { 00:18:37.629 "subsystem": "sock", 00:18:37.629 "config": [ 00:18:37.629 { 00:18:37.629 "method": "sock_set_default_impl", 00:18:37.629 "params": { 00:18:37.629 "impl_name": "posix" 00:18:37.629 } 00:18:37.629 }, 00:18:37.629 { 00:18:37.629 "method": "sock_impl_set_options", 00:18:37.629 "params": { 00:18:37.629 "impl_name": "ssl", 00:18:37.629 "recv_buf_size": 4096, 00:18:37.629 "send_buf_size": 4096, 00:18:37.629 "enable_recv_pipe": true, 00:18:37.629 "enable_quickack": false, 00:18:37.629 "enable_placement_id": 0, 00:18:37.629 "enable_zerocopy_send_server": true, 00:18:37.629 "enable_zerocopy_send_client": false, 00:18:37.629 "zerocopy_threshold": 0, 00:18:37.629 "tls_version": 0, 00:18:37.629 "enable_ktls": false 00:18:37.629 } 00:18:37.629 }, 00:18:37.629 { 00:18:37.629 "method": "sock_impl_set_options", 00:18:37.629 "params": { 00:18:37.629 "impl_name": "posix", 00:18:37.629 "recv_buf_size": 2097152, 00:18:37.629 "send_buf_size": 2097152, 00:18:37.629 "enable_recv_pipe": true, 00:18:37.629 "enable_quickack": false, 00:18:37.629 "enable_placement_id": 0, 00:18:37.629 "enable_zerocopy_send_server": true, 00:18:37.629 "enable_zerocopy_send_client": false, 00:18:37.629 "zerocopy_threshold": 0, 00:18:37.629 "tls_version": 0, 00:18:37.629 "enable_ktls": false 00:18:37.629 } 00:18:37.629 } 00:18:37.629 ] 00:18:37.629 }, 00:18:37.629 { 00:18:37.629 "subsystem": "vmd", 00:18:37.629 "config": [] 00:18:37.629 }, 00:18:37.629 { 00:18:37.629 "subsystem": "accel", 00:18:37.629 "config": [ 00:18:37.629 { 00:18:37.629 "method": "accel_set_options", 00:18:37.629 "params": { 00:18:37.629 "small_cache_size": 128, 00:18:37.629 "large_cache_size": 16, 00:18:37.629 "task_count": 2048, 00:18:37.629 "sequence_count": 2048, 00:18:37.629 "buf_count": 2048 00:18:37.629 } 00:18:37.629 } 00:18:37.629 ] 00:18:37.629 }, 00:18:37.629 { 00:18:37.629 "subsystem": "bdev", 00:18:37.629 "config": [ 00:18:37.629 { 00:18:37.629 "method": "bdev_set_options", 00:18:37.629 "params": { 00:18:37.629 "bdev_io_pool_size": 65535, 00:18:37.629 "bdev_io_cache_size": 256, 00:18:37.630 "bdev_auto_examine": true, 00:18:37.630 "iobuf_small_cache_size": 128, 00:18:37.630 "iobuf_large_cache_size": 16 00:18:37.630 } 00:18:37.630 }, 00:18:37.630 { 00:18:37.630 "method": "bdev_raid_set_options", 00:18:37.630 "params": { 00:18:37.630 "process_window_size_kb": 1024, 00:18:37.630 "process_max_bandwidth_mb_sec": 0 00:18:37.630 } 00:18:37.630 }, 
00:18:37.630 { 00:18:37.630 "method": "bdev_iscsi_set_options", 00:18:37.630 "params": { 00:18:37.630 "timeout_sec": 30 00:18:37.630 } 00:18:37.630 }, 00:18:37.630 { 00:18:37.630 "method": "bdev_nvme_set_options", 00:18:37.630 "params": { 00:18:37.630 "action_on_timeout": "none", 00:18:37.630 "timeout_us": 0, 00:18:37.630 "timeout_admin_us": 0, 00:18:37.630 "keep_alive_timeout_ms": 10000, 00:18:37.630 "arbitration_burst": 0, 00:18:37.630 "low_priority_weight": 0, 00:18:37.630 "medium_priority_weight": 0, 00:18:37.630 "high_priority_weight": 0, 00:18:37.630 "nvme_adminq_poll_period_us": 10000, 00:18:37.630 "nvme_ioq_poll_period_us": 0, 00:18:37.630 "io_queue_requests": 512, 00:18:37.630 "delay_cmd_submit": true, 00:18:37.630 "transport_retry_count": 4, 00:18:37.630 "bdev_retry_count": 3, 00:18:37.630 "transport_ack_timeout": 0, 00:18:37.630 "ctrlr_loss_timeout_sec": 0, 00:18:37.630 "reconnect_delay_sec": 0, 00:18:37.630 "fast_io_fail_timeout_sec": 0, 00:18:37.630 "disable_auto_failback": false, 00:18:37.630 "generate_uuids": false, 00:18:37.630 "transport_tos": 0, 00:18:37.630 "nvme_error_stat": false, 00:18:37.630 "rdma_srq_size": 0, 00:18:37.630 "io_path_stat": false, 00:18:37.630 "allow_accel_sequence": false, 00:18:37.630 "rdma_max_cq_size": 0, 00:18:37.630 "rdma_cm_event_timeout_ms": 0, 00:18:37.630 "dhchap_digests": [ 00:18:37.630 "sha256", 00:18:37.630 "sha384", 00:18:37.630 "sha512" 00:18:37.630 ], 00:18:37.630 "dhchap_dhgroups": [ 00:18:37.630 "null", 00:18:37.630 "ffdhe2048", 00:18:37.630 "ffdhe3072", 00:18:37.630 "ffdhe4096", 00:18:37.630 "ffdhe6144", 00:18:37.630 "ffdhe8192" 00:18:37.630 ] 00:18:37.630 } 00:18:37.630 }, 00:18:37.630 { 00:18:37.630 "method": "bdev_nvme_attach_controller", 00:18:37.630 "params": { 00:18:37.630 "name": "TLSTEST", 00:18:37.630 "trtype": "TCP", 00:18:37.630 "adrfam": "IPv4", 00:18:37.630 "traddr": "10.0.0.2", 00:18:37.630 "trsvcid": "4420", 00:18:37.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.630 "prchk_reftag": false, 00:18:37.630 "prchk_guard": false, 00:18:37.630 "ctrlr_loss_timeout_sec": 0, 00:18:37.630 "reconnect_delay_sec": 0, 00:18:37.630 "fast_io_fail_timeout_sec": 0, 00:18:37.630 "psk": "key0", 00:18:37.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.630 "hdgst": false, 00:18:37.630 "ddgst": false, 00:18:37.630 "multipath": "multipath" 00:18:37.630 } 00:18:37.630 }, 00:18:37.630 { 00:18:37.630 "method": "bdev_nvme_set_hotplug", 00:18:37.630 "params": { 00:18:37.630 "period_us": 100000, 00:18:37.630 "enable": false 00:18:37.630 } 00:18:37.630 }, 00:18:37.630 { 00:18:37.630 "method": "bdev_wait_for_examine" 00:18:37.630 } 00:18:37.630 ] 00:18:37.630 }, 00:18:37.630 { 00:18:37.630 "subsystem": "nbd", 00:18:37.630 "config": [] 00:18:37.630 } 00:18:37.630 ] 00:18:37.630 }' 00:18:37.630 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.630 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.630 [2024-11-20 16:20:08.667976] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
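Editor's note on the step traced above: the bdevperf process was started with -z (pause until told to run) and a JSON config piped in through /dev/fd/63; that config pre-loads the TLS PSK as keyring entry "key0" and attaches the NVMe/TCP controller with "psk": "key0", which is what makes the connection to the target's secure_channel listener run over TLS. The same setup can be driven interactively over the bdevperf RPC socket instead of a canned config; the sketch below is a hedged equivalent built only from parameters visible in this trace (the workspace path, key file /tmp/tmp.Q6jjTIe8UG, and the TLSTEST controller name are this job's values, not general defaults).

  # Assumes bdevperf is already running as launched in the trace:
  #   build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # Register the PSK interchange file under the name the attach call references.
  $RPC keyring_file_add_key key0 /tmp/tmp.Q6jjTIe8UG

  # Attach the controller over NVMe/TCP with the PSK, mirroring the JSON config above.
  $RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

  # Release the queued workload; this is the perform_tests call seen below in the trace.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests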
00:18:37.630 [2024-11-20 16:20:08.668022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1941674 ] 00:18:37.630 [2024-11-20 16:20:08.739832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.630 [2024-11-20 16:20:08.781268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.889 [2024-11-20 16:20:08.934566] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.461 16:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.461 16:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:38.461 16:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:38.461 Running I/O for 10 seconds... 00:18:40.775 5241.00 IOPS, 20.47 MiB/s [2024-11-20T15:20:12.946Z] 5357.00 IOPS, 20.93 MiB/s [2024-11-20T15:20:13.883Z] 5434.00 IOPS, 21.23 MiB/s [2024-11-20T15:20:14.820Z] 5480.50 IOPS, 21.41 MiB/s [2024-11-20T15:20:15.757Z] 5492.40 IOPS, 21.45 MiB/s [2024-11-20T15:20:16.694Z] 5514.33 IOPS, 21.54 MiB/s [2024-11-20T15:20:17.632Z] 5528.71 IOPS, 21.60 MiB/s [2024-11-20T15:20:19.009Z] 5517.75 IOPS, 21.55 MiB/s [2024-11-20T15:20:19.946Z] 5532.00 IOPS, 21.61 MiB/s [2024-11-20T15:20:19.946Z] 5531.90 IOPS, 21.61 MiB/s 00:18:48.712 Latency(us) 00:18:48.712 [2024-11-20T15:20:19.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.712 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:48.712 Verification LBA range: start 0x0 length 0x2000 00:18:48.712 TLSTESTn1 : 10.01 5536.79 21.63 0.00 0.00 23084.22 4837.18 50681.17 00:18:48.712 [2024-11-20T15:20:19.946Z] =================================================================================================================== 00:18:48.712 [2024-11-20T15:20:19.946Z] Total : 5536.79 21.63 0.00 0.00 23084.22 4837.18 50681.17 00:18:48.712 { 00:18:48.712 "results": [ 00:18:48.712 { 00:18:48.712 "job": "TLSTESTn1", 00:18:48.712 "core_mask": "0x4", 00:18:48.712 "workload": "verify", 00:18:48.712 "status": "finished", 00:18:48.712 "verify_range": { 00:18:48.712 "start": 0, 00:18:48.712 "length": 8192 00:18:48.712 }, 00:18:48.712 "queue_depth": 128, 00:18:48.712 "io_size": 4096, 00:18:48.712 "runtime": 10.013933, 00:18:48.712 "iops": 5536.785596628218, 00:18:48.712 "mibps": 21.628068736828975, 00:18:48.712 "io_failed": 0, 00:18:48.712 "io_timeout": 0, 00:18:48.712 "avg_latency_us": 23084.218398086476, 00:18:48.712 "min_latency_us": 4837.1809523809525, 00:18:48.712 "max_latency_us": 50681.17333333333 00:18:48.712 } 00:18:48.712 ], 00:18:48.712 "core_count": 1 00:18:48.712 } 00:18:48.712 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:48.712 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1941674 00:18:48.712 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1941674 ']' 00:18:48.712 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1941674 00:18:48.712 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:48.712 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.712 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1941674 00:18:48.712 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:48.712 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:48.712 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1941674' 00:18:48.712 killing process with pid 1941674 00:18:48.712 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1941674 00:18:48.713 Received shutdown signal, test time was about 10.000000 seconds 00:18:48.713 00:18:48.713 Latency(us) 00:18:48.713 [2024-11-20T15:20:19.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.713 [2024-11-20T15:20:19.947Z] =================================================================================================================== 00:18:48.713 [2024-11-20T15:20:19.947Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1941674 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1941491 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1941491 ']' 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1941491 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1941491 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1941491' 00:18:48.713 killing process with pid 1941491 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1941491 00:18:48.713 16:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1941491 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1943543 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1943543 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1943543 ']' 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.972 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.972 [2024-11-20 16:20:20.140627] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:18:48.972 [2024-11-20 16:20:20.140673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.972 [2024-11-20 16:20:20.201719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.231 [2024-11-20 16:20:20.242607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.231 [2024-11-20 16:20:20.242639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.231 [2024-11-20 16:20:20.242646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.231 [2024-11-20 16:20:20.242651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.231 [2024-11-20 16:20:20.242656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:49.231 [2024-11-20 16:20:20.243195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.231 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.231 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.231 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.231 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.231 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.231 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.231 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Q6jjTIe8UG 00:18:49.231 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Q6jjTIe8UG 00:18:49.231 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:49.490 [2024-11-20 16:20:20.539398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.490 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:49.750 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:49.750 [2024-11-20 16:20:20.928387] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.750 [2024-11-20 16:20:20.928588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.750 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:50.009 malloc0 00:18:50.009 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:50.268 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Q6jjTIe8UG 00:18:50.527 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:50.786 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1943834 00:18:50.786 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:50.786 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.786 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1943834 /var/tmp/bdevperf.sock 00:18:50.786 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1943834 ']' 00:18:50.786 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.786 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.786 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.786 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.786 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.786 [2024-11-20 16:20:21.822323] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:18:50.786 [2024-11-20 16:20:21.822375] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943834 ] 00:18:50.786 [2024-11-20 16:20:21.899087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.786 [2024-11-20 16:20:21.939578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.044 16:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.044 16:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:51.044 16:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q6jjTIe8UG 00:18:51.044 16:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:51.303 [2024-11-20 16:20:22.399526] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.303 nvme0n1 00:18:51.303 16:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:51.562 Running I/O for 1 seconds... 
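Editor's note on the round traced above: here the target side is assembled step by step over the default RPC socket rather than from a pre-generated config. setup_nvmf_tgt creates the TCP transport, a subsystem backed by a 32 MiB malloc bdev, a listener started with -k (TLS), and a host entry bound to PSK key0. The following is a condensed sketch of that sequence using the commands and arguments visible in the trace; the rpc.py path and the key file are specific to this job and would need adjusting elsewhere.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"            # default target socket /var/tmp/spdk.sock
  KEY=/tmp/tmp.Q6jjTIe8UG               # PSK interchange file created earlier in the test

  # -o disables the C2H success optimization, matching "c2h_success": false in the saved config further below.
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 "$KEY"
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With the listener created via -k, the "TLS support is considered experimental" notices come from tcp.c on both the listen and the attach path, and the initiator side simply registers the matching key0 on its own bdevperf socket before attaching, as shown in the trace above.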
00:18:52.499 5315.00 IOPS, 20.76 MiB/s 00:18:52.499 Latency(us) 00:18:52.499 [2024-11-20T15:20:23.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.499 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:52.499 Verification LBA range: start 0x0 length 0x2000 00:18:52.499 nvme0n1 : 1.01 5375.57 21.00 0.00 0.00 23654.36 5492.54 34203.55 00:18:52.499 [2024-11-20T15:20:23.733Z] =================================================================================================================== 00:18:52.499 [2024-11-20T15:20:23.733Z] Total : 5375.57 21.00 0.00 0.00 23654.36 5492.54 34203.55 00:18:52.499 { 00:18:52.499 "results": [ 00:18:52.499 { 00:18:52.499 "job": "nvme0n1", 00:18:52.499 "core_mask": "0x2", 00:18:52.499 "workload": "verify", 00:18:52.499 "status": "finished", 00:18:52.499 "verify_range": { 00:18:52.499 "start": 0, 00:18:52.499 "length": 8192 00:18:52.499 }, 00:18:52.499 "queue_depth": 128, 00:18:52.499 "io_size": 4096, 00:18:52.499 "runtime": 1.012544, 00:18:52.499 "iops": 5375.568864167878, 00:18:52.499 "mibps": 20.998315875655774, 00:18:52.499 "io_failed": 0, 00:18:52.499 "io_timeout": 0, 00:18:52.499 "avg_latency_us": 23654.36326955548, 00:18:52.499 "min_latency_us": 5492.540952380952, 00:18:52.499 "max_latency_us": 34203.550476190474 00:18:52.499 } 00:18:52.499 ], 00:18:52.499 "core_count": 1 00:18:52.499 } 00:18:52.499 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1943834 00:18:52.499 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1943834 ']' 00:18:52.499 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1943834 00:18:52.499 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.499 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.499 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1943834 00:18:52.499 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.499 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.499 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1943834' 00:18:52.499 killing process with pid 1943834 00:18:52.499 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1943834 00:18:52.499 Received shutdown signal, test time was about 1.000000 seconds 00:18:52.499 00:18:52.499 Latency(us) 00:18:52.499 [2024-11-20T15:20:23.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.499 [2024-11-20T15:20:23.733Z] =================================================================================================================== 00:18:52.499 [2024-11-20T15:20:23.733Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.499 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1943834 00:18:52.759 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1943543 00:18:52.759 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1943543 ']' 00:18:52.759 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1943543 00:18:52.759 16:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.759 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.759 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1943543 00:18:52.759 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.759 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.759 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1943543' 00:18:52.759 killing process with pid 1943543 00:18:52.759 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1943543 00:18:52.759 16:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1943543 00:18:53.031 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:53.031 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.031 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.031 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.032 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1944139 00:18:53.032 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1944139 00:18:53.032 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:53.032 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1944139 ']' 00:18:53.032 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.032 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.032 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.032 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.032 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.032 [2024-11-20 16:20:24.084079] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:18:53.032 [2024-11-20 16:20:24.084133] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.032 [2024-11-20 16:20:24.164376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.032 [2024-11-20 16:20:24.202192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.032 [2024-11-20 16:20:24.202231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:53.032 [2024-11-20 16:20:24.202238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.032 [2024-11-20 16:20:24.202244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.032 [2024-11-20 16:20:24.202248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.032 [2024-11-20 16:20:24.202805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.291 [2024-11-20 16:20:24.350243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.291 malloc0 00:18:53.291 [2024-11-20 16:20:24.378395] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.291 [2024-11-20 16:20:24.378598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1944323 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1944323 /var/tmp/bdevperf.sock 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1944323 ']' 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.291 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.291 [2024-11-20 16:20:24.453642] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
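Editor's note on the restart traced here (target pid 1944139, bdevperf pid 1944323): this round repeats the PSK-secured attach one more time and then, once the one-second verify run completes, snapshots both applications with save_config. The tgtcfg and bperfcfg JSON dumps further below are those snapshots, and the test replays the target one by piping it back in through -c /dev/fd/62. A minimal sketch of that capture-and-replay step, assuming the same RPC sockets as this job (the output file names are placeholders, not names used by the test):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Snapshot the running nvmf target (default socket /var/tmp/spdk.sock)...
  $SPDK/scripts/rpc.py save_config > tgt_config.json

  # ...and the bdevperf initiator on its own socket.
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf_config.json

  # A saved snapshot can be fed straight back at startup, as the test later does via /dev/fd/62.
  $SPDK/build/bin/nvmf_tgt -c tgt_config.json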
00:18:53.291 [2024-11-20 16:20:24.453681] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944323 ] 00:18:53.551 [2024-11-20 16:20:24.527251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.551 [2024-11-20 16:20:24.567366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.551 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.551 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.551 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q6jjTIe8UG 00:18:53.810 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:53.810 [2024-11-20 16:20:25.028307] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.068 nvme0n1 00:18:54.068 16:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.068 Running I/O for 1 seconds... 00:18:55.263 5398.00 IOPS, 21.09 MiB/s 00:18:55.263 Latency(us) 00:18:55.263 [2024-11-20T15:20:26.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.263 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:55.263 Verification LBA range: start 0x0 length 0x2000 00:18:55.263 nvme0n1 : 1.01 5445.99 21.27 0.00 0.00 23328.99 5024.43 21346.01 00:18:55.263 [2024-11-20T15:20:26.497Z] =================================================================================================================== 00:18:55.263 [2024-11-20T15:20:26.497Z] Total : 5445.99 21.27 0.00 0.00 23328.99 5024.43 21346.01 00:18:55.263 { 00:18:55.263 "results": [ 00:18:55.263 { 00:18:55.263 "job": "nvme0n1", 00:18:55.263 "core_mask": "0x2", 00:18:55.263 "workload": "verify", 00:18:55.263 "status": "finished", 00:18:55.263 "verify_range": { 00:18:55.263 "start": 0, 00:18:55.263 "length": 8192 00:18:55.263 }, 00:18:55.263 "queue_depth": 128, 00:18:55.263 "io_size": 4096, 00:18:55.263 "runtime": 1.014692, 00:18:55.263 "iops": 5445.987550902146, 00:18:55.263 "mibps": 21.273388870711507, 00:18:55.263 "io_failed": 0, 00:18:55.263 "io_timeout": 0, 00:18:55.263 "avg_latency_us": 23328.985672578117, 00:18:55.263 "min_latency_us": 5024.426666666666, 00:18:55.263 "max_latency_us": 21346.01142857143 00:18:55.263 } 00:18:55.263 ], 00:18:55.263 "core_count": 1 00:18:55.263 } 00:18:55.263 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:55.263 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.263 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.263 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.263 16:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:55.263 "subsystems": [ 00:18:55.263 { 00:18:55.263 "subsystem": "keyring", 00:18:55.263 "config": [ 00:18:55.263 { 00:18:55.263 "method": "keyring_file_add_key", 00:18:55.263 "params": { 00:18:55.263 "name": "key0", 00:18:55.263 "path": "/tmp/tmp.Q6jjTIe8UG" 00:18:55.263 } 00:18:55.263 } 00:18:55.263 ] 00:18:55.263 }, 00:18:55.263 { 00:18:55.263 "subsystem": "iobuf", 00:18:55.263 "config": [ 00:18:55.263 { 00:18:55.263 "method": "iobuf_set_options", 00:18:55.263 "params": { 00:18:55.263 "small_pool_count": 8192, 00:18:55.263 "large_pool_count": 1024, 00:18:55.263 "small_bufsize": 8192, 00:18:55.263 "large_bufsize": 135168, 00:18:55.263 "enable_numa": false 00:18:55.263 } 00:18:55.263 } 00:18:55.263 ] 00:18:55.263 }, 00:18:55.263 { 00:18:55.263 "subsystem": "sock", 00:18:55.263 "config": [ 00:18:55.263 { 00:18:55.263 "method": "sock_set_default_impl", 00:18:55.263 "params": { 00:18:55.264 "impl_name": "posix" 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "sock_impl_set_options", 00:18:55.264 "params": { 00:18:55.264 "impl_name": "ssl", 00:18:55.264 "recv_buf_size": 4096, 00:18:55.264 "send_buf_size": 4096, 00:18:55.264 "enable_recv_pipe": true, 00:18:55.264 "enable_quickack": false, 00:18:55.264 "enable_placement_id": 0, 00:18:55.264 "enable_zerocopy_send_server": true, 00:18:55.264 "enable_zerocopy_send_client": false, 00:18:55.264 "zerocopy_threshold": 0, 00:18:55.264 "tls_version": 0, 00:18:55.264 "enable_ktls": false 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "sock_impl_set_options", 00:18:55.264 "params": { 00:18:55.264 "impl_name": "posix", 00:18:55.264 "recv_buf_size": 2097152, 00:18:55.264 "send_buf_size": 2097152, 00:18:55.264 "enable_recv_pipe": true, 00:18:55.264 "enable_quickack": false, 00:18:55.264 "enable_placement_id": 0, 00:18:55.264 "enable_zerocopy_send_server": true, 00:18:55.264 "enable_zerocopy_send_client": false, 00:18:55.264 "zerocopy_threshold": 0, 00:18:55.264 "tls_version": 0, 00:18:55.264 "enable_ktls": false 00:18:55.264 } 00:18:55.264 } 00:18:55.264 ] 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "subsystem": "vmd", 00:18:55.264 "config": [] 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "subsystem": "accel", 00:18:55.264 "config": [ 00:18:55.264 { 00:18:55.264 "method": "accel_set_options", 00:18:55.264 "params": { 00:18:55.264 "small_cache_size": 128, 00:18:55.264 "large_cache_size": 16, 00:18:55.264 "task_count": 2048, 00:18:55.264 "sequence_count": 2048, 00:18:55.264 "buf_count": 2048 00:18:55.264 } 00:18:55.264 } 00:18:55.264 ] 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "subsystem": "bdev", 00:18:55.264 "config": [ 00:18:55.264 { 00:18:55.264 "method": "bdev_set_options", 00:18:55.264 "params": { 00:18:55.264 "bdev_io_pool_size": 65535, 00:18:55.264 "bdev_io_cache_size": 256, 00:18:55.264 "bdev_auto_examine": true, 00:18:55.264 "iobuf_small_cache_size": 128, 00:18:55.264 "iobuf_large_cache_size": 16 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "bdev_raid_set_options", 00:18:55.264 "params": { 00:18:55.264 "process_window_size_kb": 1024, 00:18:55.264 "process_max_bandwidth_mb_sec": 0 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "bdev_iscsi_set_options", 00:18:55.264 "params": { 00:18:55.264 "timeout_sec": 30 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "bdev_nvme_set_options", 00:18:55.264 "params": { 00:18:55.264 "action_on_timeout": "none", 00:18:55.264 
"timeout_us": 0, 00:18:55.264 "timeout_admin_us": 0, 00:18:55.264 "keep_alive_timeout_ms": 10000, 00:18:55.264 "arbitration_burst": 0, 00:18:55.264 "low_priority_weight": 0, 00:18:55.264 "medium_priority_weight": 0, 00:18:55.264 "high_priority_weight": 0, 00:18:55.264 "nvme_adminq_poll_period_us": 10000, 00:18:55.264 "nvme_ioq_poll_period_us": 0, 00:18:55.264 "io_queue_requests": 0, 00:18:55.264 "delay_cmd_submit": true, 00:18:55.264 "transport_retry_count": 4, 00:18:55.264 "bdev_retry_count": 3, 00:18:55.264 "transport_ack_timeout": 0, 00:18:55.264 "ctrlr_loss_timeout_sec": 0, 00:18:55.264 "reconnect_delay_sec": 0, 00:18:55.264 "fast_io_fail_timeout_sec": 0, 00:18:55.264 "disable_auto_failback": false, 00:18:55.264 "generate_uuids": false, 00:18:55.264 "transport_tos": 0, 00:18:55.264 "nvme_error_stat": false, 00:18:55.264 "rdma_srq_size": 0, 00:18:55.264 "io_path_stat": false, 00:18:55.264 "allow_accel_sequence": false, 00:18:55.264 "rdma_max_cq_size": 0, 00:18:55.264 "rdma_cm_event_timeout_ms": 0, 00:18:55.264 "dhchap_digests": [ 00:18:55.264 "sha256", 00:18:55.264 "sha384", 00:18:55.264 "sha512" 00:18:55.264 ], 00:18:55.264 "dhchap_dhgroups": [ 00:18:55.264 "null", 00:18:55.264 "ffdhe2048", 00:18:55.264 "ffdhe3072", 00:18:55.264 "ffdhe4096", 00:18:55.264 "ffdhe6144", 00:18:55.264 "ffdhe8192" 00:18:55.264 ] 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "bdev_nvme_set_hotplug", 00:18:55.264 "params": { 00:18:55.264 "period_us": 100000, 00:18:55.264 "enable": false 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "bdev_malloc_create", 00:18:55.264 "params": { 00:18:55.264 "name": "malloc0", 00:18:55.264 "num_blocks": 8192, 00:18:55.264 "block_size": 4096, 00:18:55.264 "physical_block_size": 4096, 00:18:55.264 "uuid": "63f86f2d-d794-4aa2-a770-cf24878f93f7", 00:18:55.264 "optimal_io_boundary": 0, 00:18:55.264 "md_size": 0, 00:18:55.264 "dif_type": 0, 00:18:55.264 "dif_is_head_of_md": false, 00:18:55.264 "dif_pi_format": 0 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "bdev_wait_for_examine" 00:18:55.264 } 00:18:55.264 ] 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "subsystem": "nbd", 00:18:55.264 "config": [] 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "subsystem": "scheduler", 00:18:55.264 "config": [ 00:18:55.264 { 00:18:55.264 "method": "framework_set_scheduler", 00:18:55.264 "params": { 00:18:55.264 "name": "static" 00:18:55.264 } 00:18:55.264 } 00:18:55.264 ] 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "subsystem": "nvmf", 00:18:55.264 "config": [ 00:18:55.264 { 00:18:55.264 "method": "nvmf_set_config", 00:18:55.264 "params": { 00:18:55.264 "discovery_filter": "match_any", 00:18:55.264 "admin_cmd_passthru": { 00:18:55.264 "identify_ctrlr": false 00:18:55.264 }, 00:18:55.264 "dhchap_digests": [ 00:18:55.264 "sha256", 00:18:55.264 "sha384", 00:18:55.264 "sha512" 00:18:55.264 ], 00:18:55.264 "dhchap_dhgroups": [ 00:18:55.264 "null", 00:18:55.264 "ffdhe2048", 00:18:55.264 "ffdhe3072", 00:18:55.264 "ffdhe4096", 00:18:55.264 "ffdhe6144", 00:18:55.264 "ffdhe8192" 00:18:55.264 ] 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "nvmf_set_max_subsystems", 00:18:55.264 "params": { 00:18:55.264 "max_subsystems": 1024 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "nvmf_set_crdt", 00:18:55.264 "params": { 00:18:55.264 "crdt1": 0, 00:18:55.264 "crdt2": 0, 00:18:55.264 "crdt3": 0 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "nvmf_create_transport", 00:18:55.264 "params": 
{ 00:18:55.264 "trtype": "TCP", 00:18:55.264 "max_queue_depth": 128, 00:18:55.264 "max_io_qpairs_per_ctrlr": 127, 00:18:55.264 "in_capsule_data_size": 4096, 00:18:55.264 "max_io_size": 131072, 00:18:55.264 "io_unit_size": 131072, 00:18:55.264 "max_aq_depth": 128, 00:18:55.264 "num_shared_buffers": 511, 00:18:55.264 "buf_cache_size": 4294967295, 00:18:55.264 "dif_insert_or_strip": false, 00:18:55.264 "zcopy": false, 00:18:55.264 "c2h_success": false, 00:18:55.264 "sock_priority": 0, 00:18:55.264 "abort_timeout_sec": 1, 00:18:55.264 "ack_timeout": 0, 00:18:55.264 "data_wr_pool_size": 0 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "nvmf_create_subsystem", 00:18:55.264 "params": { 00:18:55.264 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.264 "allow_any_host": false, 00:18:55.264 "serial_number": "00000000000000000000", 00:18:55.264 "model_number": "SPDK bdev Controller", 00:18:55.264 "max_namespaces": 32, 00:18:55.264 "min_cntlid": 1, 00:18:55.264 "max_cntlid": 65519, 00:18:55.264 "ana_reporting": false 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "nvmf_subsystem_add_host", 00:18:55.264 "params": { 00:18:55.264 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.264 "host": "nqn.2016-06.io.spdk:host1", 00:18:55.264 "psk": "key0" 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "nvmf_subsystem_add_ns", 00:18:55.264 "params": { 00:18:55.264 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.264 "namespace": { 00:18:55.264 "nsid": 1, 00:18:55.264 "bdev_name": "malloc0", 00:18:55.264 "nguid": "63F86F2DD7944AA2A770CF24878F93F7", 00:18:55.264 "uuid": "63f86f2d-d794-4aa2-a770-cf24878f93f7", 00:18:55.264 "no_auto_visible": false 00:18:55.264 } 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "method": "nvmf_subsystem_add_listener", 00:18:55.264 "params": { 00:18:55.264 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.264 "listen_address": { 00:18:55.264 "trtype": "TCP", 00:18:55.264 "adrfam": "IPv4", 00:18:55.264 "traddr": "10.0.0.2", 00:18:55.264 "trsvcid": "4420" 00:18:55.264 }, 00:18:55.264 "secure_channel": false, 00:18:55.264 "sock_impl": "ssl" 00:18:55.264 } 00:18:55.264 } 00:18:55.264 ] 00:18:55.264 } 00:18:55.264 ] 00:18:55.264 }' 00:18:55.264 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:55.524 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:55.524 "subsystems": [ 00:18:55.524 { 00:18:55.524 "subsystem": "keyring", 00:18:55.524 "config": [ 00:18:55.524 { 00:18:55.524 "method": "keyring_file_add_key", 00:18:55.524 "params": { 00:18:55.524 "name": "key0", 00:18:55.524 "path": "/tmp/tmp.Q6jjTIe8UG" 00:18:55.524 } 00:18:55.524 } 00:18:55.524 ] 00:18:55.524 }, 00:18:55.524 { 00:18:55.524 "subsystem": "iobuf", 00:18:55.524 "config": [ 00:18:55.524 { 00:18:55.524 "method": "iobuf_set_options", 00:18:55.524 "params": { 00:18:55.524 "small_pool_count": 8192, 00:18:55.524 "large_pool_count": 1024, 00:18:55.524 "small_bufsize": 8192, 00:18:55.524 "large_bufsize": 135168, 00:18:55.524 "enable_numa": false 00:18:55.524 } 00:18:55.524 } 00:18:55.524 ] 00:18:55.524 }, 00:18:55.524 { 00:18:55.524 "subsystem": "sock", 00:18:55.524 "config": [ 00:18:55.524 { 00:18:55.524 "method": "sock_set_default_impl", 00:18:55.524 "params": { 00:18:55.524 "impl_name": "posix" 00:18:55.524 } 00:18:55.524 }, 00:18:55.524 { 00:18:55.524 "method": "sock_impl_set_options", 00:18:55.524 
"params": { 00:18:55.524 "impl_name": "ssl", 00:18:55.524 "recv_buf_size": 4096, 00:18:55.524 "send_buf_size": 4096, 00:18:55.524 "enable_recv_pipe": true, 00:18:55.524 "enable_quickack": false, 00:18:55.524 "enable_placement_id": 0, 00:18:55.524 "enable_zerocopy_send_server": true, 00:18:55.524 "enable_zerocopy_send_client": false, 00:18:55.524 "zerocopy_threshold": 0, 00:18:55.524 "tls_version": 0, 00:18:55.524 "enable_ktls": false 00:18:55.524 } 00:18:55.524 }, 00:18:55.524 { 00:18:55.524 "method": "sock_impl_set_options", 00:18:55.524 "params": { 00:18:55.524 "impl_name": "posix", 00:18:55.524 "recv_buf_size": 2097152, 00:18:55.524 "send_buf_size": 2097152, 00:18:55.524 "enable_recv_pipe": true, 00:18:55.524 "enable_quickack": false, 00:18:55.524 "enable_placement_id": 0, 00:18:55.524 "enable_zerocopy_send_server": true, 00:18:55.524 "enable_zerocopy_send_client": false, 00:18:55.524 "zerocopy_threshold": 0, 00:18:55.524 "tls_version": 0, 00:18:55.524 "enable_ktls": false 00:18:55.524 } 00:18:55.524 } 00:18:55.524 ] 00:18:55.524 }, 00:18:55.524 { 00:18:55.524 "subsystem": "vmd", 00:18:55.524 "config": [] 00:18:55.524 }, 00:18:55.524 { 00:18:55.524 "subsystem": "accel", 00:18:55.524 "config": [ 00:18:55.524 { 00:18:55.524 "method": "accel_set_options", 00:18:55.524 "params": { 00:18:55.524 "small_cache_size": 128, 00:18:55.524 "large_cache_size": 16, 00:18:55.524 "task_count": 2048, 00:18:55.524 "sequence_count": 2048, 00:18:55.524 "buf_count": 2048 00:18:55.524 } 00:18:55.524 } 00:18:55.524 ] 00:18:55.524 }, 00:18:55.524 { 00:18:55.524 "subsystem": "bdev", 00:18:55.524 "config": [ 00:18:55.524 { 00:18:55.524 "method": "bdev_set_options", 00:18:55.524 "params": { 00:18:55.524 "bdev_io_pool_size": 65535, 00:18:55.524 "bdev_io_cache_size": 256, 00:18:55.524 "bdev_auto_examine": true, 00:18:55.524 "iobuf_small_cache_size": 128, 00:18:55.524 "iobuf_large_cache_size": 16 00:18:55.524 } 00:18:55.524 }, 00:18:55.524 { 00:18:55.524 "method": "bdev_raid_set_options", 00:18:55.524 "params": { 00:18:55.524 "process_window_size_kb": 1024, 00:18:55.524 "process_max_bandwidth_mb_sec": 0 00:18:55.524 } 00:18:55.524 }, 00:18:55.524 { 00:18:55.524 "method": "bdev_iscsi_set_options", 00:18:55.524 "params": { 00:18:55.524 "timeout_sec": 30 00:18:55.524 } 00:18:55.524 }, 00:18:55.524 { 00:18:55.524 "method": "bdev_nvme_set_options", 00:18:55.524 "params": { 00:18:55.524 "action_on_timeout": "none", 00:18:55.524 "timeout_us": 0, 00:18:55.524 "timeout_admin_us": 0, 00:18:55.524 "keep_alive_timeout_ms": 10000, 00:18:55.524 "arbitration_burst": 0, 00:18:55.524 "low_priority_weight": 0, 00:18:55.524 "medium_priority_weight": 0, 00:18:55.524 "high_priority_weight": 0, 00:18:55.524 "nvme_adminq_poll_period_us": 10000, 00:18:55.524 "nvme_ioq_poll_period_us": 0, 00:18:55.524 "io_queue_requests": 512, 00:18:55.524 "delay_cmd_submit": true, 00:18:55.524 "transport_retry_count": 4, 00:18:55.525 "bdev_retry_count": 3, 00:18:55.525 "transport_ack_timeout": 0, 00:18:55.525 "ctrlr_loss_timeout_sec": 0, 00:18:55.525 "reconnect_delay_sec": 0, 00:18:55.525 "fast_io_fail_timeout_sec": 0, 00:18:55.525 "disable_auto_failback": false, 00:18:55.525 "generate_uuids": false, 00:18:55.525 "transport_tos": 0, 00:18:55.525 "nvme_error_stat": false, 00:18:55.525 "rdma_srq_size": 0, 00:18:55.525 "io_path_stat": false, 00:18:55.525 "allow_accel_sequence": false, 00:18:55.525 "rdma_max_cq_size": 0, 00:18:55.525 "rdma_cm_event_timeout_ms": 0, 00:18:55.525 "dhchap_digests": [ 00:18:55.525 "sha256", 00:18:55.525 "sha384", 00:18:55.525 
"sha512" 00:18:55.525 ], 00:18:55.525 "dhchap_dhgroups": [ 00:18:55.525 "null", 00:18:55.525 "ffdhe2048", 00:18:55.525 "ffdhe3072", 00:18:55.525 "ffdhe4096", 00:18:55.525 "ffdhe6144", 00:18:55.525 "ffdhe8192" 00:18:55.525 ] 00:18:55.525 } 00:18:55.525 }, 00:18:55.525 { 00:18:55.525 "method": "bdev_nvme_attach_controller", 00:18:55.525 "params": { 00:18:55.525 "name": "nvme0", 00:18:55.525 "trtype": "TCP", 00:18:55.525 "adrfam": "IPv4", 00:18:55.525 "traddr": "10.0.0.2", 00:18:55.525 "trsvcid": "4420", 00:18:55.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.525 "prchk_reftag": false, 00:18:55.525 "prchk_guard": false, 00:18:55.525 "ctrlr_loss_timeout_sec": 0, 00:18:55.525 "reconnect_delay_sec": 0, 00:18:55.525 "fast_io_fail_timeout_sec": 0, 00:18:55.525 "psk": "key0", 00:18:55.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.525 "hdgst": false, 00:18:55.525 "ddgst": false, 00:18:55.525 "multipath": "multipath" 00:18:55.525 } 00:18:55.525 }, 00:18:55.525 { 00:18:55.525 "method": "bdev_nvme_set_hotplug", 00:18:55.525 "params": { 00:18:55.525 "period_us": 100000, 00:18:55.525 "enable": false 00:18:55.525 } 00:18:55.525 }, 00:18:55.525 { 00:18:55.525 "method": "bdev_enable_histogram", 00:18:55.525 "params": { 00:18:55.525 "name": "nvme0n1", 00:18:55.525 "enable": true 00:18:55.525 } 00:18:55.525 }, 00:18:55.525 { 00:18:55.525 "method": "bdev_wait_for_examine" 00:18:55.525 } 00:18:55.525 ] 00:18:55.525 }, 00:18:55.525 { 00:18:55.525 "subsystem": "nbd", 00:18:55.525 "config": [] 00:18:55.525 } 00:18:55.525 ] 00:18:55.525 }' 00:18:55.525 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1944323 00:18:55.525 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1944323 ']' 00:18:55.525 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1944323 00:18:55.525 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.525 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.525 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1944323 00:18:55.525 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:55.525 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:55.525 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1944323' 00:18:55.525 killing process with pid 1944323 00:18:55.525 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1944323 00:18:55.525 Received shutdown signal, test time was about 1.000000 seconds 00:18:55.525 00:18:55.525 Latency(us) 00:18:55.525 [2024-11-20T15:20:26.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.525 [2024-11-20T15:20:26.759Z] =================================================================================================================== 00:18:55.525 [2024-11-20T15:20:26.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.525 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1944323 00:18:55.784 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1944139 00:18:55.784 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1944139 
']' 00:18:55.784 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1944139 00:18:55.784 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.784 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.784 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1944139 00:18:55.784 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.784 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.784 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1944139' 00:18:55.784 killing process with pid 1944139 00:18:55.784 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1944139 00:18:55.784 16:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1944139 00:18:56.044 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:56.044 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:56.044 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.044 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:56.044 "subsystems": [ 00:18:56.044 { 00:18:56.044 "subsystem": "keyring", 00:18:56.044 "config": [ 00:18:56.044 { 00:18:56.044 "method": "keyring_file_add_key", 00:18:56.044 "params": { 00:18:56.044 "name": "key0", 00:18:56.044 "path": "/tmp/tmp.Q6jjTIe8UG" 00:18:56.044 } 00:18:56.044 } 00:18:56.044 ] 00:18:56.044 }, 00:18:56.044 { 00:18:56.044 "subsystem": "iobuf", 00:18:56.044 "config": [ 00:18:56.044 { 00:18:56.044 "method": "iobuf_set_options", 00:18:56.044 "params": { 00:18:56.044 "small_pool_count": 8192, 00:18:56.044 "large_pool_count": 1024, 00:18:56.044 "small_bufsize": 8192, 00:18:56.044 "large_bufsize": 135168, 00:18:56.044 "enable_numa": false 00:18:56.044 } 00:18:56.044 } 00:18:56.044 ] 00:18:56.044 }, 00:18:56.044 { 00:18:56.044 "subsystem": "sock", 00:18:56.044 "config": [ 00:18:56.044 { 00:18:56.044 "method": "sock_set_default_impl", 00:18:56.044 "params": { 00:18:56.044 "impl_name": "posix" 00:18:56.044 } 00:18:56.044 }, 00:18:56.044 { 00:18:56.044 "method": "sock_impl_set_options", 00:18:56.044 "params": { 00:18:56.044 "impl_name": "ssl", 00:18:56.044 "recv_buf_size": 4096, 00:18:56.044 "send_buf_size": 4096, 00:18:56.044 "enable_recv_pipe": true, 00:18:56.044 "enable_quickack": false, 00:18:56.044 "enable_placement_id": 0, 00:18:56.044 "enable_zerocopy_send_server": true, 00:18:56.044 "enable_zerocopy_send_client": false, 00:18:56.044 "zerocopy_threshold": 0, 00:18:56.044 "tls_version": 0, 00:18:56.044 "enable_ktls": false 00:18:56.044 } 00:18:56.044 }, 00:18:56.044 { 00:18:56.044 "method": "sock_impl_set_options", 00:18:56.044 "params": { 00:18:56.044 "impl_name": "posix", 00:18:56.044 "recv_buf_size": 2097152, 00:18:56.044 "send_buf_size": 2097152, 00:18:56.044 "enable_recv_pipe": true, 00:18:56.044 "enable_quickack": false, 00:18:56.044 "enable_placement_id": 0, 00:18:56.044 "enable_zerocopy_send_server": true, 00:18:56.044 "enable_zerocopy_send_client": false, 00:18:56.044 "zerocopy_threshold": 0, 00:18:56.044 "tls_version": 0, 00:18:56.044 "enable_ktls": 
false 00:18:56.044 } 00:18:56.044 } 00:18:56.044 ] 00:18:56.044 }, 00:18:56.044 { 00:18:56.044 "subsystem": "vmd", 00:18:56.044 "config": [] 00:18:56.044 }, 00:18:56.044 { 00:18:56.044 "subsystem": "accel", 00:18:56.044 "config": [ 00:18:56.044 { 00:18:56.044 "method": "accel_set_options", 00:18:56.044 "params": { 00:18:56.044 "small_cache_size": 128, 00:18:56.044 "large_cache_size": 16, 00:18:56.044 "task_count": 2048, 00:18:56.044 "sequence_count": 2048, 00:18:56.044 "buf_count": 2048 00:18:56.044 } 00:18:56.044 } 00:18:56.044 ] 00:18:56.044 }, 00:18:56.044 { 00:18:56.044 "subsystem": "bdev", 00:18:56.044 "config": [ 00:18:56.044 { 00:18:56.044 "method": "bdev_set_options", 00:18:56.044 "params": { 00:18:56.044 "bdev_io_pool_size": 65535, 00:18:56.044 "bdev_io_cache_size": 256, 00:18:56.044 "bdev_auto_examine": true, 00:18:56.044 "iobuf_small_cache_size": 128, 00:18:56.044 "iobuf_large_cache_size": 16 00:18:56.044 } 00:18:56.044 }, 00:18:56.044 { 00:18:56.044 "method": "bdev_raid_set_options", 00:18:56.044 "params": { 00:18:56.044 "process_window_size_kb": 1024, 00:18:56.044 "process_max_bandwidth_mb_sec": 0 00:18:56.044 } 00:18:56.044 }, 00:18:56.044 { 00:18:56.044 "method": "bdev_iscsi_set_options", 00:18:56.044 "params": { 00:18:56.044 "timeout_sec": 30 00:18:56.044 } 00:18:56.044 }, 00:18:56.044 { 00:18:56.044 "method": "bdev_nvme_set_options", 00:18:56.044 "params": { 00:18:56.044 "action_on_timeout": "none", 00:18:56.044 "timeout_us": 0, 00:18:56.044 "timeout_admin_us": 0, 00:18:56.044 "keep_alive_timeout_ms": 10000, 00:18:56.044 "arbitration_burst": 0, 00:18:56.044 "low_priority_weight": 0, 00:18:56.044 "medium_priority_weight": 0, 00:18:56.044 "high_priority_weight": 0, 00:18:56.044 "nvme_adminq_poll_period_us": 10000, 00:18:56.044 "nvme_ioq_poll_period_us": 0, 00:18:56.044 "io_queue_requests": 0, 00:18:56.044 "delay_cmd_submit": true, 00:18:56.044 "transport_retry_count": 4, 00:18:56.044 "bdev_retry_count": 3, 00:18:56.044 "transport_ack_timeout": 0, 00:18:56.044 "ctrlr_loss_timeout_sec": 0, 00:18:56.044 "reconnect_delay_sec": 0, 00:18:56.044 "fast_io_fail_timeout_sec": 0, 00:18:56.044 "disable_auto_failback": false, 00:18:56.044 "generate_uuids": false, 00:18:56.044 "transport_tos": 0, 00:18:56.044 "nvme_error_stat": false, 00:18:56.044 "rdma_srq_size": 0, 00:18:56.044 "io_path_stat": false, 00:18:56.044 "allow_accel_sequence": false, 00:18:56.044 "rdma_max_cq_size": 0, 00:18:56.044 "rdma_cm_event_timeout_ms": 0, 00:18:56.044 "dhchap_digests": [ 00:18:56.044 "sha256", 00:18:56.044 "sha384", 00:18:56.044 "sha512" 00:18:56.044 ], 00:18:56.044 "dhchap_dhgroups": [ 00:18:56.044 "null", 00:18:56.044 "ffdhe2048", 00:18:56.044 "ffdhe3072", 00:18:56.044 "ffdhe4096", 00:18:56.044 "ffdhe6144", 00:18:56.044 "ffdhe8192" 00:18:56.044 ] 00:18:56.044 } 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "method": "bdev_nvme_set_hotplug", 00:18:56.045 "params": { 00:18:56.045 "period_us": 100000, 00:18:56.045 "enable": false 00:18:56.045 } 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "method": "bdev_malloc_create", 00:18:56.045 "params": { 00:18:56.045 "name": "malloc0", 00:18:56.045 "num_blocks": 8192, 00:18:56.045 "block_size": 4096, 00:18:56.045 "physical_block_size": 4096, 00:18:56.045 "uuid": "63f86f2d-d794-4aa2-a770-cf24878f93f7", 00:18:56.045 "optimal_io_boundary": 0, 00:18:56.045 "md_size": 0, 00:18:56.045 "dif_type": 0, 00:18:56.045 "dif_is_head_of_md": false, 00:18:56.045 "dif_pi_format": 0 00:18:56.045 } 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "method": "bdev_wait_for_examine" 
00:18:56.045 } 00:18:56.045 ] 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "subsystem": "nbd", 00:18:56.045 "config": [] 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "subsystem": "scheduler", 00:18:56.045 "config": [ 00:18:56.045 { 00:18:56.045 "method": "framework_set_scheduler", 00:18:56.045 "params": { 00:18:56.045 "name": "static" 00:18:56.045 } 00:18:56.045 } 00:18:56.045 ] 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "subsystem": "nvmf", 00:18:56.045 "config": [ 00:18:56.045 { 00:18:56.045 "method": "nvmf_set_config", 00:18:56.045 "params": { 00:18:56.045 "discovery_filter": "match_any", 00:18:56.045 "admin_cmd_passthru": { 00:18:56.045 "identify_ctrlr": false 00:18:56.045 }, 00:18:56.045 "dhchap_digests": [ 00:18:56.045 "sha256", 00:18:56.045 "sha384", 00:18:56.045 "sha512" 00:18:56.045 ], 00:18:56.045 "dhchap_dhgroups": [ 00:18:56.045 "null", 00:18:56.045 "ffdhe2048", 00:18:56.045 "ffdhe3072", 00:18:56.045 "ffdhe4096", 00:18:56.045 "ffdhe6144", 00:18:56.045 "ffdhe8192" 00:18:56.045 ] 00:18:56.045 } 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "method": "nvmf_set_max_subsystems", 00:18:56.045 "params": { 00:18:56.045 "max_subsystems": 1024 00:18:56.045 } 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "method": "nvmf_set_crdt", 00:18:56.045 "params": { 00:18:56.045 "crdt1": 0, 00:18:56.045 "crdt2": 0, 00:18:56.045 "crdt3": 0 00:18:56.045 } 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "method": "nvmf_create_transport", 00:18:56.045 "params": { 00:18:56.045 "trtype": "TCP", 00:18:56.045 "max_queue_depth": 128, 00:18:56.045 "max_io_qpairs_per_ctrlr": 127, 00:18:56.045 "in_capsule_data_size": 4096, 00:18:56.045 "max_io_size": 131072, 00:18:56.045 "io_unit_size": 131072, 00:18:56.045 "max_aq_depth": 128, 00:18:56.045 "num_shared_buffers": 511, 00:18:56.045 "buf_cache_size": 4294967295, 00:18:56.045 "dif_insert_or_strip": false, 00:18:56.045 "zcopy": false, 00:18:56.045 "c2h_success": false, 00:18:56.045 "sock_priority": 0, 00:18:56.045 "abort_timeout_sec": 1, 00:18:56.045 "ack_timeout": 0, 00:18:56.045 "data_wr_pool_size": 0 00:18:56.045 } 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "method": "nvmf_create_subsystem", 00:18:56.045 "params": { 00:18:56.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.045 "allow_any_host": false, 00:18:56.045 "serial_number": "00000000000000000000", 00:18:56.045 "model_number": "SPDK bdev Controller", 00:18:56.045 "max_namespaces": 32, 00:18:56.045 "min_cntlid": 1, 00:18:56.045 "max_cntlid": 65519, 00:18:56.045 "ana_reporting": false 00:18:56.045 } 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "method": "nvmf_subsystem_add_host", 00:18:56.045 "params": { 00:18:56.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.045 "host": "nqn.2016-06.io.spdk:host1", 00:18:56.045 "psk": "key0" 00:18:56.045 } 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "method": "nvmf_subsystem_add_ns", 00:18:56.045 "params": { 00:18:56.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.045 "namespace": { 00:18:56.045 "nsid": 1, 00:18:56.045 "bdev_name": "malloc0", 00:18:56.045 "nguid": "63F86F2DD7944AA2A770CF24878F93F7", 00:18:56.045 "uuid": "63f86f2d-d794-4aa2-a770-cf24878f93f7", 00:18:56.045 "no_auto_visible": false 00:18:56.045 } 00:18:56.045 } 00:18:56.045 }, 00:18:56.045 { 00:18:56.045 "method": "nvmf_subsystem_add_listener", 00:18:56.045 "params": { 00:18:56.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.045 "listen_address": { 00:18:56.045 "trtype": "TCP", 00:18:56.045 "adrfam": "IPv4", 00:18:56.045 "traddr": "10.0.0.2", 00:18:56.045 "trsvcid": "4420" 00:18:56.045 }, 00:18:56.045 
"secure_channel": false, 00:18:56.045 "sock_impl": "ssl" 00:18:56.045 } 00:18:56.045 } 00:18:56.045 ] 00:18:56.045 } 00:18:56.045 ] 00:18:56.045 }' 00:18:56.045 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.045 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1944748 00:18:56.045 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1944748 00:18:56.045 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:56.045 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1944748 ']' 00:18:56.045 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.045 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.045 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.045 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.045 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.045 [2024-11-20 16:20:27.111128] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:18:56.045 [2024-11-20 16:20:27.111177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.045 [2024-11-20 16:20:27.187342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.045 [2024-11-20 16:20:27.226482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.045 [2024-11-20 16:20:27.226517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.045 [2024-11-20 16:20:27.226526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.045 [2024-11-20 16:20:27.226532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.045 [2024-11-20 16:20:27.226537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:56.045 [2024-11-20 16:20:27.227094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.304 [2024-11-20 16:20:27.439065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.304 [2024-11-20 16:20:27.471095] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:56.304 [2024-11-20 16:20:27.471317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1944826 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1944826 /var/tmp/bdevperf.sock 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1944826 ']' 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:56.872 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:56.872 "subsystems": [ 00:18:56.872 { 00:18:56.872 "subsystem": "keyring", 00:18:56.872 "config": [ 00:18:56.872 { 00:18:56.872 "method": "keyring_file_add_key", 00:18:56.872 "params": { 00:18:56.872 "name": "key0", 00:18:56.872 "path": "/tmp/tmp.Q6jjTIe8UG" 00:18:56.872 } 00:18:56.872 } 00:18:56.872 ] 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "subsystem": "iobuf", 00:18:56.872 "config": [ 00:18:56.872 { 00:18:56.872 "method": "iobuf_set_options", 00:18:56.872 "params": { 00:18:56.872 "small_pool_count": 8192, 00:18:56.872 "large_pool_count": 1024, 00:18:56.872 "small_bufsize": 8192, 00:18:56.872 "large_bufsize": 135168, 00:18:56.872 "enable_numa": false 00:18:56.872 } 00:18:56.872 } 00:18:56.872 ] 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "subsystem": "sock", 00:18:56.872 "config": [ 00:18:56.872 { 00:18:56.872 "method": "sock_set_default_impl", 00:18:56.872 "params": { 00:18:56.872 "impl_name": "posix" 00:18:56.872 } 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "method": "sock_impl_set_options", 00:18:56.872 "params": { 00:18:56.872 "impl_name": "ssl", 00:18:56.872 "recv_buf_size": 4096, 00:18:56.872 "send_buf_size": 4096, 00:18:56.872 "enable_recv_pipe": true, 00:18:56.872 "enable_quickack": false, 00:18:56.872 "enable_placement_id": 0, 00:18:56.872 "enable_zerocopy_send_server": true, 00:18:56.872 "enable_zerocopy_send_client": false, 00:18:56.872 "zerocopy_threshold": 0, 00:18:56.872 "tls_version": 0, 00:18:56.872 "enable_ktls": false 00:18:56.872 } 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "method": "sock_impl_set_options", 00:18:56.872 "params": { 00:18:56.872 "impl_name": "posix", 00:18:56.872 "recv_buf_size": 2097152, 00:18:56.872 "send_buf_size": 2097152, 00:18:56.872 "enable_recv_pipe": true, 00:18:56.872 "enable_quickack": false, 00:18:56.872 "enable_placement_id": 0, 00:18:56.872 "enable_zerocopy_send_server": true, 00:18:56.872 "enable_zerocopy_send_client": false, 00:18:56.872 "zerocopy_threshold": 0, 00:18:56.872 "tls_version": 0, 00:18:56.872 "enable_ktls": false 00:18:56.872 } 00:18:56.872 } 00:18:56.872 ] 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "subsystem": "vmd", 00:18:56.872 "config": [] 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "subsystem": "accel", 00:18:56.872 "config": [ 00:18:56.872 { 00:18:56.872 "method": "accel_set_options", 00:18:56.872 "params": { 00:18:56.872 "small_cache_size": 128, 00:18:56.872 "large_cache_size": 16, 00:18:56.872 "task_count": 2048, 00:18:56.872 "sequence_count": 2048, 00:18:56.872 "buf_count": 2048 00:18:56.872 } 00:18:56.872 } 00:18:56.872 ] 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "subsystem": "bdev", 00:18:56.872 "config": [ 00:18:56.872 { 00:18:56.872 "method": "bdev_set_options", 00:18:56.872 "params": { 00:18:56.872 "bdev_io_pool_size": 65535, 00:18:56.872 "bdev_io_cache_size": 256, 00:18:56.872 "bdev_auto_examine": true, 00:18:56.872 "iobuf_small_cache_size": 128, 00:18:56.872 "iobuf_large_cache_size": 16 00:18:56.872 } 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "method": "bdev_raid_set_options", 00:18:56.872 "params": { 00:18:56.872 "process_window_size_kb": 1024, 00:18:56.872 "process_max_bandwidth_mb_sec": 0 00:18:56.872 } 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "method": "bdev_iscsi_set_options", 00:18:56.872 "params": { 00:18:56.872 "timeout_sec": 30 00:18:56.872 } 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "method": "bdev_nvme_set_options", 00:18:56.872 "params": { 00:18:56.872 "action_on_timeout": "none", 
00:18:56.873 "timeout_us": 0, 00:18:56.873 "timeout_admin_us": 0, 00:18:56.873 "keep_alive_timeout_ms": 10000, 00:18:56.873 "arbitration_burst": 0, 00:18:56.873 "low_priority_weight": 0, 00:18:56.873 "medium_priority_weight": 0, 00:18:56.873 "high_priority_weight": 0, 00:18:56.873 "nvme_adminq_poll_period_us": 10000, 00:18:56.873 "nvme_ioq_poll_period_us": 0, 00:18:56.873 "io_queue_requests": 512, 00:18:56.873 "delay_cmd_submit": true, 00:18:56.873 "transport_retry_count": 4, 00:18:56.873 "bdev_retry_count": 3, 00:18:56.873 "transport_ack_timeout": 0, 00:18:56.873 "ctrlr_loss_timeout_sec": 0, 00:18:56.873 "reconnect_delay_sec": 0, 00:18:56.873 "fast_io_fail_timeout_sec": 0, 00:18:56.873 "disable_auto_failback": false, 00:18:56.873 "generate_uuids": false, 00:18:56.873 "transport_tos": 0, 00:18:56.873 "nvme_error_stat": false, 00:18:56.873 "rdma_srq_size": 0, 00:18:56.873 "io_path_stat": false, 00:18:56.873 "allow_accel_sequence": false, 00:18:56.873 "rdma_max_cq_size": 0, 00:18:56.873 "rdma_cm_event_timeout_ms": 0, 00:18:56.873 "dhchap_digests": [ 00:18:56.873 "sha256", 00:18:56.873 "sha384", 00:18:56.873 "sha512" 00:18:56.873 ], 00:18:56.873 "dhchap_dhgroups": [ 00:18:56.873 "null", 00:18:56.873 "ffdhe2048", 00:18:56.873 "ffdhe3072", 00:18:56.873 "ffdhe4096", 00:18:56.873 "ffdhe6144", 00:18:56.873 "ffdhe8192" 00:18:56.873 ] 00:18:56.873 } 00:18:56.873 }, 00:18:56.873 { 00:18:56.873 "method": "bdev_nvme_attach_controller", 00:18:56.873 "params": { 00:18:56.873 "name": "nvme0", 00:18:56.873 "trtype": "TCP", 00:18:56.873 "adrfam": "IPv4", 00:18:56.873 "traddr": "10.0.0.2", 00:18:56.873 "trsvcid": "4420", 00:18:56.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.873 "prchk_reftag": false, 00:18:56.873 "prchk_guard": false, 00:18:56.873 "ctrlr_loss_timeout_sec": 0, 00:18:56.873 "reconnect_delay_sec": 0, 00:18:56.873 "fast_io_fail_timeout_sec": 0, 00:18:56.873 "psk": "key0", 00:18:56.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.873 "hdgst": false, 00:18:56.873 "ddgst": false, 00:18:56.873 "multipath": "multipath" 00:18:56.873 } 00:18:56.873 }, 00:18:56.873 { 00:18:56.873 "method": "bdev_nvme_set_hotplug", 00:18:56.873 "params": { 00:18:56.873 "period_us": 100000, 00:18:56.873 "enable": false 00:18:56.873 } 00:18:56.873 }, 00:18:56.873 { 00:18:56.873 "method": "bdev_enable_histogram", 00:18:56.873 "params": { 00:18:56.873 "name": "nvme0n1", 00:18:56.873 "enable": true 00:18:56.873 } 00:18:56.873 }, 00:18:56.873 { 00:18:56.873 "method": "bdev_wait_for_examine" 00:18:56.873 } 00:18:56.873 ] 00:18:56.873 }, 00:18:56.873 { 00:18:56.873 "subsystem": "nbd", 00:18:56.873 "config": [] 00:18:56.873 } 00:18:56.873 ] 00:18:56.873 }' 00:18:56.873 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.873 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.873 [2024-11-20 16:20:28.031332] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:18:56.873 [2024-11-20 16:20:28.031379] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944826 ] 00:18:57.132 [2024-11-20 16:20:28.105902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.132 [2024-11-20 16:20:28.146523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.132 [2024-11-20 16:20:28.300780] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.699 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.699 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:57.699 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:57.699 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:57.958 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.958 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.958 Running I/O for 1 seconds... 00:18:59.335 5208.00 IOPS, 20.34 MiB/s 00:18:59.335 Latency(us) 00:18:59.335 [2024-11-20T15:20:30.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.335 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:59.335 Verification LBA range: start 0x0 length 0x2000 00:18:59.335 nvme0n1 : 1.02 5233.44 20.44 0.00 0.00 24228.12 4743.56 57422.02 00:18:59.335 [2024-11-20T15:20:30.569Z] =================================================================================================================== 00:18:59.335 [2024-11-20T15:20:30.569Z] Total : 5233.44 20.44 0.00 0.00 24228.12 4743.56 57422.02 00:18:59.335 { 00:18:59.335 "results": [ 00:18:59.335 { 00:18:59.335 "job": "nvme0n1", 00:18:59.335 "core_mask": "0x2", 00:18:59.335 "workload": "verify", 00:18:59.335 "status": "finished", 00:18:59.335 "verify_range": { 00:18:59.335 "start": 0, 00:18:59.335 "length": 8192 00:18:59.335 }, 00:18:59.335 "queue_depth": 128, 00:18:59.335 "io_size": 4096, 00:18:59.335 "runtime": 1.019789, 00:18:59.335 "iops": 5233.4355440194, 00:18:59.335 "mibps": 20.443107593825783, 00:18:59.335 "io_failed": 0, 00:18:59.335 "io_timeout": 0, 00:18:59.335 "avg_latency_us": 24228.117203708163, 00:18:59.335 "min_latency_us": 4743.558095238095, 00:18:59.335 "max_latency_us": 57422.01904761905 00:18:59.335 } 00:18:59.335 ], 00:18:59.335 "core_count": 1 00:18:59.335 } 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:59.335 nvmf_trace.0 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1944826 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1944826 ']' 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1944826 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1944826 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1944826' 00:18:59.335 killing process with pid 1944826 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1944826 00:18:59.335 Received shutdown signal, test time was about 1.000000 seconds 00:18:59.335 00:18:59.335 Latency(us) 00:18:59.335 [2024-11-20T15:20:30.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.335 [2024-11-20T15:20:30.569Z] =================================================================================================================== 00:18:59.335 [2024-11-20T15:20:30.569Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1944826 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:59.335 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:59.335 rmmod nvme_tcp 00:18:59.335 rmmod nvme_fabrics 00:18:59.335 rmmod nvme_keyring 00:18:59.336 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:59.336 16:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:59.336 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:59.336 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1944748 ']' 00:18:59.336 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1944748 00:18:59.336 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1944748 ']' 00:18:59.336 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1944748 00:18:59.336 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:59.336 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.336 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1944748 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1944748' 00:18:59.595 killing process with pid 1944748 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1944748 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1944748 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.595 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.132 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:02.132 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.dz9KKvl5lC /tmp/tmp.PQZ1PaNT3B /tmp/tmp.Q6jjTIe8UG 00:19:02.132 00:19:02.132 real 1m19.909s 00:19:02.132 user 2m2.119s 00:19:02.132 sys 0m30.354s 00:19:02.132 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.132 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.132 ************************************ 00:19:02.132 END TEST nvmf_tls 
00:19:02.132 ************************************ 00:19:02.132 16:20:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:02.132 16:20:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:02.132 16:20:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.132 16:20:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:02.132 ************************************ 00:19:02.132 START TEST nvmf_fips 00:19:02.132 ************************************ 00:19:02.132 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:02.132 * Looking for test storage... 00:19:02.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:02.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.132 --rc genhtml_branch_coverage=1 00:19:02.132 --rc genhtml_function_coverage=1 00:19:02.132 --rc genhtml_legend=1 00:19:02.132 --rc geninfo_all_blocks=1 00:19:02.132 --rc geninfo_unexecuted_blocks=1 00:19:02.132 00:19:02.132 ' 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:02.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.132 --rc genhtml_branch_coverage=1 00:19:02.132 --rc genhtml_function_coverage=1 00:19:02.132 --rc genhtml_legend=1 00:19:02.132 --rc geninfo_all_blocks=1 00:19:02.132 --rc geninfo_unexecuted_blocks=1 00:19:02.132 00:19:02.132 ' 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:02.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.132 --rc genhtml_branch_coverage=1 00:19:02.132 --rc genhtml_function_coverage=1 00:19:02.132 --rc genhtml_legend=1 00:19:02.132 --rc geninfo_all_blocks=1 00:19:02.132 --rc geninfo_unexecuted_blocks=1 00:19:02.132 00:19:02.132 ' 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:02.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.132 --rc genhtml_branch_coverage=1 00:19:02.132 --rc genhtml_function_coverage=1 00:19:02.132 --rc genhtml_legend=1 00:19:02.132 --rc geninfo_all_blocks=1 00:19:02.132 --rc geninfo_unexecuted_blocks=1 00:19:02.132 00:19:02.132 ' 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.132 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:02.133 16:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:02.133 Error setting digest 00:19:02.133 40127CCF8C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:02.133 40127CCF8C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:02.133 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:02.133 
16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.134 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:02.134 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:02.134 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:02.134 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.134 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.134 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.134 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:02.134 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:02.134 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:02.134 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.703 16:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:08.703 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:08.703 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.703 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.704 16:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:08.704 Found net devices under 0000:86:00.0: cvl_0_0 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:08.704 Found net devices under 0000:86:00.1: cvl_0_1 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:08.704 16:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.704 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:08.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:19:08.704 00:19:08.704 --- 10.0.0.2 ping statistics --- 00:19:08.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.704 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:08.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:19:08.704 00:19:08.704 --- 10.0.0.1 ping statistics --- 00:19:08.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.704 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1948849 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1948849 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1948849 ']' 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.704 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.704 [2024-11-20 16:20:39.305471] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
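The nvmf_tcp_init trace above reduces to ordinary iproute2 and iptables work: the first E810 port (cvl_0_0) is moved into a private network namespace and given the target address, the second port (cvl_0_1) stays in the root namespace as the initiator, TCP port 4420 is opened, and both directions are verified with ping. A condensed sketch using the interface names and addresses reported by the log, not a verbatim excerpt of nvmf/common.sh:

    # The target side lives in its own namespace so traffic between the two
    # local ports actually traverses the NICs instead of the loopback path.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target port
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator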
00:19:08.704 [2024-11-20 16:20:39.305515] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.704 [2024-11-20 16:20:39.382541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.704 [2024-11-20 16:20:39.424538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.704 [2024-11-20 16:20:39.424575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.704 [2024-11-20 16:20:39.424581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.704 [2024-11-20 16:20:39.424587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.704 [2024-11-20 16:20:39.424592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.704 [2024-11-20 16:20:39.425153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.XQD 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.XQD 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.XQD 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.XQD 00:19:08.964 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.223 [2024-11-20 16:20:40.346884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.223 [2024-11-20 16:20:40.362879] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:09.223 [2024-11-20 16:20:40.363102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.223 malloc0 00:19:09.223 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:09.223 16:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1949101 00:19:09.223 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:09.223 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1949101 /var/tmp/bdevperf.sock 00:19:09.223 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1949101 ']' 00:19:09.223 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.223 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.223 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.223 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.223 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:09.482 [2024-11-20 16:20:40.492852] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:19:09.482 [2024-11-20 16:20:40.492906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1949101 ] 00:19:09.482 [2024-11-20 16:20:40.554567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.482 [2024-11-20 16:20:40.596945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.482 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.482 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:09.482 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.XQD 00:19:09.741 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:10.000 [2024-11-20 16:20:41.057939] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.000 TLSTESTn1 00:19:10.000 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:10.269 Running I/O for 10 seconds... 
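Pieced together, the FIPS case runs two SPDK processes against each other over TLS: the target writes the NVMe TLS PSK to a mode-0600 temp file and lets setup_nvmf_tgt_conf configure the TCP transport, a malloc0-backed subsystem and a TLS listener on 10.0.0.2:4420 (those RPCs are not expanded in the trace), while the initiator is a separate bdevperf instance that registers the same key in its keyring and attaches with it. A condensed sketch of the initiator-side sequence exactly as logged, with the long workspace paths shortened for readability:

    # bdevperf runs as its own SPDK app on core 2 with a private RPC socket
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # register the PSK file under the key name "key0" inside the bdevperf process
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.XQD

    # attach to the TLS-enabled NVMe/TCP listener using that key; the
    # attached namespace appears as bdev TLSTESTn1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # drive verified I/O against TLSTESTn1 for the 10-second run reported below
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests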
00:19:12.149 5529.00 IOPS, 21.60 MiB/s [2024-11-20T15:20:44.320Z] 5495.00 IOPS, 21.46 MiB/s [2024-11-20T15:20:45.257Z] 5506.33 IOPS, 21.51 MiB/s [2024-11-20T15:20:46.633Z] 5539.75 IOPS, 21.64 MiB/s [2024-11-20T15:20:47.569Z] 5539.80 IOPS, 21.64 MiB/s [2024-11-20T15:20:48.505Z] 5556.83 IOPS, 21.71 MiB/s [2024-11-20T15:20:49.440Z] 5565.00 IOPS, 21.74 MiB/s [2024-11-20T15:20:50.375Z] 5565.75 IOPS, 21.74 MiB/s [2024-11-20T15:20:51.335Z] 5556.44 IOPS, 21.70 MiB/s [2024-11-20T15:20:51.335Z] 5553.60 IOPS, 21.69 MiB/s 00:19:20.101 Latency(us) 00:19:20.101 [2024-11-20T15:20:51.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.101 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:20.101 Verification LBA range: start 0x0 length 0x2000 00:19:20.101 TLSTESTn1 : 10.02 5556.36 21.70 0.00 0.00 22999.30 5118.05 22469.49 00:19:20.101 [2024-11-20T15:20:51.335Z] =================================================================================================================== 00:19:20.101 [2024-11-20T15:20:51.335Z] Total : 5556.36 21.70 0.00 0.00 22999.30 5118.05 22469.49 00:19:20.101 { 00:19:20.101 "results": [ 00:19:20.101 { 00:19:20.101 "job": "TLSTESTn1", 00:19:20.101 "core_mask": "0x4", 00:19:20.101 "workload": "verify", 00:19:20.101 "status": "finished", 00:19:20.101 "verify_range": { 00:19:20.101 "start": 0, 00:19:20.101 "length": 8192 00:19:20.101 }, 00:19:20.101 "queue_depth": 128, 00:19:20.101 "io_size": 4096, 00:19:20.101 "runtime": 10.01771, 00:19:20.101 "iops": 5556.359686994333, 00:19:20.101 "mibps": 21.704530027321614, 00:19:20.101 "io_failed": 0, 00:19:20.101 "io_timeout": 0, 00:19:20.101 "avg_latency_us": 22999.300112550067, 00:19:20.101 "min_latency_us": 5118.049523809524, 00:19:20.101 "max_latency_us": 22469.485714285714 00:19:20.101 } 00:19:20.101 ], 00:19:20.101 "core_count": 1 00:19:20.101 } 00:19:20.101 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:20.101 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:20.101 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:20.101 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:20.101 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:20.394 nvmf_trace.0 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1949101 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1949101 ']' 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 1949101 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1949101 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1949101' 00:19:20.394 killing process with pid 1949101 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1949101 00:19:20.394 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.394 00:19:20.394 Latency(us) 00:19:20.394 [2024-11-20T15:20:51.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.394 [2024-11-20T15:20:51.628Z] =================================================================================================================== 00:19:20.394 [2024-11-20T15:20:51.628Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1949101 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:20.394 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:20.394 rmmod nvme_tcp 00:19:20.690 rmmod nvme_fabrics 00:19:20.690 rmmod nvme_keyring 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1948849 ']' 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1948849 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1948849 ']' 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1948849 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1948849 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:20.690 16:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1948849' 00:19:20.690 killing process with pid 1948849 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1948849 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1948849 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.690 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.251 16:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:23.252 16:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.XQD 00:19:23.252 00:19:23.252 real 0m21.050s 00:19:23.252 user 0m22.067s 00:19:23.252 sys 0m9.614s 00:19:23.252 16:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.252 16:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:23.252 ************************************ 00:19:23.252 END TEST nvmf_fips 00:19:23.252 ************************************ 00:19:23.252 16:20:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:23.252 16:20:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:23.252 16:20:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.252 16:20:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:23.252 ************************************ 00:19:23.252 START TEST nvmf_control_msg_list 00:19:23.252 ************************************ 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:23.252 * Looking for test storage... 
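The nvmf_fips cleanup recorded above reverses that setup: bdevperf and nvmf_tgt are killed, the kernel NVMe/TCP modules are unloaded, and the SPDK-tagged firewall rule, the target namespace, the initiator address and the PSK file are removed before the test is declared done after roughly 21 seconds. A condensed sketch; the body of _remove_spdk_ns is hidden by xtrace in the log, so the netns deletion line is an assumption rather than a quoted command:

    kill 1949101                                           # bdevperf (reactor_2)
    sync
    modprobe -v -r nvme-tcp                                # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    kill 1948849                                           # nvmf_tgt (reactor_1)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only rules tagged with the SPDK_NVMF comment
    ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
    rm -f /tmp/spdk-psk.XQD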
00:19:23.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:23.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.252 --rc genhtml_branch_coverage=1 00:19:23.252 --rc genhtml_function_coverage=1 00:19:23.252 --rc genhtml_legend=1 00:19:23.252 --rc geninfo_all_blocks=1 00:19:23.252 --rc geninfo_unexecuted_blocks=1 00:19:23.252 00:19:23.252 ' 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:23.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.252 --rc genhtml_branch_coverage=1 00:19:23.252 --rc genhtml_function_coverage=1 00:19:23.252 --rc genhtml_legend=1 00:19:23.252 --rc geninfo_all_blocks=1 00:19:23.252 --rc geninfo_unexecuted_blocks=1 00:19:23.252 00:19:23.252 ' 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:23.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.252 --rc genhtml_branch_coverage=1 00:19:23.252 --rc genhtml_function_coverage=1 00:19:23.252 --rc genhtml_legend=1 00:19:23.252 --rc geninfo_all_blocks=1 00:19:23.252 --rc geninfo_unexecuted_blocks=1 00:19:23.252 00:19:23.252 ' 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:23.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.252 --rc genhtml_branch_coverage=1 00:19:23.252 --rc genhtml_function_coverage=1 00:19:23.252 --rc genhtml_legend=1 00:19:23.252 --rc geninfo_all_blocks=1 00:19:23.252 --rc geninfo_unexecuted_blocks=1 00:19:23.252 00:19:23.252 ' 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:23.252 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:23.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:23.253 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.818 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.818 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:29.819 16:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:29.819 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.819 16:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:29.819 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:29.819 Found net devices under 0000:86:00.0: cvl_0_0 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:29.819 Found net devices under 0000:86:00.1: cvl_0_1 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.819 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.819 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.819 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.819 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:29.819 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.819 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.819 16:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.819 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:29.819 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:29.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:19:29.820 00:19:29.820 --- 10.0.0.2 ping statistics --- 00:19:29.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.820 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:29.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:19:29.820 00:19:29.820 --- 10.0.0.1 ping statistics --- 00:19:29.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.820 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1954490 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1954490 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1954490 ']' 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.820 [2024-11-20 16:21:00.250727] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:19:29.820 [2024-11-20 16:21:00.250772] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.820 [2024-11-20 16:21:00.329534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.820 [2024-11-20 16:21:00.370545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.820 [2024-11-20 16:21:00.370581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.820 [2024-11-20 16:21:00.370589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.820 [2024-11-20 16:21:00.370596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.820 [2024-11-20 16:21:00.370601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:29.820 [2024-11-20 16:21:00.371172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.820 [2024-11-20 16:21:00.520881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.820 Malloc0 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.820 16:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.820 [2024-11-20 16:21:00.561197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1954533 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1954534 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1954535 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:29.820 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1954533 00:19:29.820 [2024-11-20 16:21:00.629743] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:29.820 [2024-11-20 16:21:00.629917] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:29.820 [2024-11-20 16:21:00.640093] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:30.756 Initializing NVMe Controllers 00:19:30.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:30.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:30.756 Initialization complete. Launching workers. 
00:19:30.756 ======================================================== 00:19:30.756 Latency(us) 00:19:30.756 Device Information : IOPS MiB/s Average min max 00:19:30.756 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6434.99 25.14 155.05 129.69 407.09 00:19:30.756 ======================================================== 00:19:30.756 Total : 6434.99 25.14 155.05 129.69 407.09 00:19:30.756 00:19:30.756 Initializing NVMe Controllers 00:19:30.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:30.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:30.756 Initialization complete. Launching workers. 00:19:30.756 ======================================================== 00:19:30.756 Latency(us) 00:19:30.756 Device Information : IOPS MiB/s Average min max 00:19:30.756 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 31.00 0.12 33000.89 156.15 40983.31 00:19:30.756 ======================================================== 00:19:30.756 Total : 31.00 0.12 33000.89 156.15 40983.31 00:19:30.756 00:19:30.756 Initializing NVMe Controllers 00:19:30.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:30.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:30.756 Initialization complete. Launching workers. 00:19:30.756 ======================================================== 00:19:30.756 Latency(us) 00:19:30.756 Device Information : IOPS MiB/s Average min max 00:19:30.756 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40919.06 40407.36 41999.34 00:19:30.756 ======================================================== 00:19:30.756 Total : 25.00 0.10 40919.06 40407.36 41999.34 00:19:30.756 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1954534 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1954535 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.756 rmmod nvme_tcp 00:19:30.756 rmmod nvme_fabrics 00:19:30.756 rmmod nvme_keyring 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- 
# '[' -n 1954490 ']' 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1954490 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1954490 ']' 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1954490 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1954490 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1954490' 00:19:30.756 killing process with pid 1954490 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1954490 00:19:30.756 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1954490 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.015 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:33.549 00:19:33.549 real 0m10.172s 00:19:33.549 user 0m6.645s 00:19:33.549 sys 0m5.427s 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.549 ************************************ 00:19:33.549 END TEST nvmf_control_msg_list 00:19:33.549 ************************************ 
00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.549 ************************************ 00:19:33.549 START TEST nvmf_wait_for_buf 00:19:33.549 ************************************ 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:33.549 * Looking for test storage... 00:19:33.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:33.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.549 --rc genhtml_branch_coverage=1 00:19:33.549 --rc genhtml_function_coverage=1 00:19:33.549 --rc genhtml_legend=1 00:19:33.549 --rc geninfo_all_blocks=1 00:19:33.549 --rc geninfo_unexecuted_blocks=1 00:19:33.549 00:19:33.549 ' 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:33.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.549 --rc genhtml_branch_coverage=1 00:19:33.549 --rc genhtml_function_coverage=1 00:19:33.549 --rc genhtml_legend=1 00:19:33.549 --rc geninfo_all_blocks=1 00:19:33.549 --rc geninfo_unexecuted_blocks=1 00:19:33.549 00:19:33.549 ' 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:33.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.549 --rc genhtml_branch_coverage=1 00:19:33.549 --rc genhtml_function_coverage=1 00:19:33.549 --rc genhtml_legend=1 00:19:33.549 --rc geninfo_all_blocks=1 00:19:33.549 --rc geninfo_unexecuted_blocks=1 00:19:33.549 00:19:33.549 ' 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:33.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.549 --rc genhtml_branch_coverage=1 00:19:33.549 --rc genhtml_function_coverage=1 00:19:33.549 --rc genhtml_legend=1 00:19:33.549 --rc geninfo_all_blocks=1 00:19:33.549 --rc geninfo_unexecuted_blocks=1 00:19:33.549 00:19:33.549 ' 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.549 16:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.549 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:33.550 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.120 
16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:40.120 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.120 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:40.121 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:40.121 Found net devices under 0000:86:00.0: cvl_0_0 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:40.121 Found net devices under 0000:86:00.1: cvl_0_1 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.121 16:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:40.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:19:40.121 00:19:40.121 --- 10.0.0.2 ping statistics --- 00:19:40.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.121 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:19:40.121 00:19:40.121 --- 10.0.0.1 ping statistics --- 00:19:40.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.121 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1958764 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1958764 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1958764 ']' 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.121 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.121 [2024-11-20 16:21:10.525770] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:19:40.122 [2024-11-20 16:21:10.525821] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.122 [2024-11-20 16:21:10.607807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.122 [2024-11-20 16:21:10.651034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.122 [2024-11-20 16:21:10.651066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.122 [2024-11-20 16:21:10.651073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.122 [2024-11-20 16:21:10.651079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.122 [2024-11-20 16:21:10.651084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.122 [2024-11-20 16:21:10.651653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.381 16:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 Malloc0 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 [2024-11-20 16:21:11.520335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 [2024-11-20 16:21:11.548523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.381 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:40.640 [2024-11-20 16:21:11.635222] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:42.017 Initializing NVMe Controllers 00:19:42.017 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:42.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:42.017 Initialization complete. Launching workers. 00:19:42.017 ======================================================== 00:19:42.017 Latency(us) 00:19:42.017 Device Information : IOPS MiB/s Average min max 00:19:42.017 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 29.00 3.62 141888.87 7291.23 191536.98 00:19:42.017 ======================================================== 00:19:42.017 Total : 29.00 3.62 141888.87 7291.23 191536.98 00:19:42.017 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=438 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 438 -eq 0 ]] 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.017 rmmod nvme_tcp 00:19:42.017 rmmod nvme_fabrics 00:19:42.017 rmmod nvme_keyring 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1958764 ']' 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1958764 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1958764 ']' 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1958764 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:19:42.017 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.018 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1958764 00:19:42.018 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.018 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.018 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1958764' 00:19:42.018 killing process with pid 1958764 00:19:42.018 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1958764 00:19:42.018 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1958764 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.277 16:21:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.183 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:44.443 00:19:44.443 real 0m11.135s 00:19:44.443 user 0m4.745s 00:19:44.443 sys 0m5.029s 00:19:44.443 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.443 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:44.443 ************************************ 00:19:44.443 END TEST nvmf_wait_for_buf 00:19:44.443 ************************************ 00:19:44.443 16:21:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:44.443 16:21:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:44.443 16:21:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:44.443 16:21:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:44.443 16:21:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:44.443 16:21:15 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:51.014 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:51.014 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:51.014 Found net devices under 0000:86:00.0: cvl_0_0 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:51.014 Found net devices under 0000:86:00.1: cvl_0_1 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:51.014 ************************************ 00:19:51.014 START TEST nvmf_perf_adq 00:19:51.014 ************************************ 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:51.014 * Looking for test storage... 00:19:51.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.014 16:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:51.014 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:51.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.015 --rc genhtml_branch_coverage=1 00:19:51.015 --rc genhtml_function_coverage=1 00:19:51.015 --rc genhtml_legend=1 00:19:51.015 --rc geninfo_all_blocks=1 00:19:51.015 --rc geninfo_unexecuted_blocks=1 00:19:51.015 00:19:51.015 ' 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:51.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.015 --rc genhtml_branch_coverage=1 00:19:51.015 --rc genhtml_function_coverage=1 00:19:51.015 --rc genhtml_legend=1 00:19:51.015 --rc geninfo_all_blocks=1 00:19:51.015 --rc geninfo_unexecuted_blocks=1 00:19:51.015 00:19:51.015 ' 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:51.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.015 --rc genhtml_branch_coverage=1 00:19:51.015 --rc genhtml_function_coverage=1 00:19:51.015 --rc genhtml_legend=1 00:19:51.015 --rc geninfo_all_blocks=1 00:19:51.015 --rc geninfo_unexecuted_blocks=1 00:19:51.015 00:19:51.015 ' 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:51.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.015 --rc genhtml_branch_coverage=1 00:19:51.015 --rc genhtml_function_coverage=1 00:19:51.015 --rc genhtml_legend=1 00:19:51.015 --rc geninfo_all_blocks=1 00:19:51.015 --rc geninfo_unexecuted_blocks=1 00:19:51.015 00:19:51.015 ' 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
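The lcov probe traced just above goes through the harness's lt/cmp_versions helpers in scripts/common.sh: each version string is split on dots, dashes and colons and the fields are compared numerically left to right, which here decides that lcov 1.15 predates 2 and therefore keeps the legacy --rc lcov_* coverage options. A minimal stand-alone sketch of that comparison, for illustration only (not the exact scripts/common.sh code; numeric fields are assumed, as in the "1.15" case above):

# version_lt A B: exit 0 when version A sorts strictly before version B
version_lt() {
    local -a v1 v2
    local i len
    IFS='.-:' read -ra v1 <<< "$1"   # "1.15" -> (1 15)
    IFS='.-:' read -ra v2 <<< "$2"   # "2"    -> (2)
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # the sketch pads the shorter version with zeros
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo 'lcov older than 2: keep the --rc lcov_* option spelling'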
00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:51.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:51.015 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:51.015 16:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.292 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:56.293 16:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:56.293 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:56.293 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:56.293 Found net devices under 0000:86:00.0: cvl_0_0 00:19:56.293 16:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:56.293 Found net devices under 0000:86:00.1: cvl_0_1 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:56.293 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:57.229 16:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:59.140 16:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:04.419 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:04.419 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:04.419 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:04.420 Found net devices under 0000:86:00.0: cvl_0_0 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:04.420 Found net devices under 0000:86:00.1: cvl_0_1 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:04.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:20:04.420 00:20:04.420 --- 10.0.0.2 ping statistics --- 00:20:04.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.420 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:20:04.420 00:20:04.420 --- 10.0.0.1 ping statistics --- 00:20:04.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.420 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1967117 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1967117 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1967117 ']' 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.420 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.420 [2024-11-20 16:21:35.590397] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
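The nvmftestinit/nvmf_tcp_init sequence traced above gives the target its own network namespace: the first e810 port (cvl_0_0) is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator side, TCP port 4420 is admitted in iptables, both directions are ping-checked, and nvmf_tgt is then launched inside the namespace with --wait-for-rpc. Condensed into plain commands, with the interface names and addresses copied from this particular run (they are machine-specific), the setup is roughly:

NS=cvl_0_0_ns_spdk          # namespace that owns the target-side port
TGT_IF=cvl_0_0              # target port, as discovered on this machine
INI_IF=cvl_0_1              # initiator port, stays in the root namespace

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # move the target port into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address, inside the namespace
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                         # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target namespace -> root namespace
modprobe nvme-tcp
ip netns exec "$NS" ./build/bin/nvmf_tgt -m 0xF --wait-for-rpc &  # target runs inside the namespace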
00:20:04.420 [2024-11-20 16:21:35.590441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.680 [2024-11-20 16:21:35.654857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.680 [2024-11-20 16:21:35.695649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.680 [2024-11-20 16:21:35.695687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.680 [2024-11-20 16:21:35.695695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.680 [2024-11-20 16:21:35.695701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.680 [2024-11-20 16:21:35.695706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.680 [2024-11-20 16:21:35.697283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.680 [2024-11-20 16:21:35.697389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.680 [2024-11-20 16:21:35.697494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.680 [2024-11-20 16:21:35.697496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.680 
16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.680 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.940 [2024-11-20 16:21:35.936353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.940 Malloc1 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.940 [2024-11-20 16:21:35.992047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1967357 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:04.940 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:06.846 16:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:06.846 16:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.846 16:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.846 16:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.846 16:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:06.846 "tick_rate": 2100000000, 00:20:06.846 "poll_groups": [ 00:20:06.846 { 00:20:06.846 "name": "nvmf_tgt_poll_group_000", 00:20:06.846 "admin_qpairs": 1, 00:20:06.846 "io_qpairs": 1, 00:20:06.846 "current_admin_qpairs": 1, 00:20:06.846 "current_io_qpairs": 1, 00:20:06.846 "pending_bdev_io": 0, 00:20:06.846 "completed_nvme_io": 20023, 00:20:06.846 "transports": [ 00:20:06.846 { 00:20:06.847 "trtype": "TCP" 00:20:06.847 } 00:20:06.847 ] 00:20:06.847 }, 00:20:06.847 { 00:20:06.847 "name": "nvmf_tgt_poll_group_001", 00:20:06.847 "admin_qpairs": 0, 00:20:06.847 "io_qpairs": 1, 00:20:06.847 "current_admin_qpairs": 0, 00:20:06.847 "current_io_qpairs": 1, 00:20:06.847 "pending_bdev_io": 0, 00:20:06.847 "completed_nvme_io": 20334, 00:20:06.847 "transports": [ 00:20:06.847 { 00:20:06.847 "trtype": "TCP" 00:20:06.847 } 00:20:06.847 ] 00:20:06.847 }, 00:20:06.847 { 00:20:06.847 "name": "nvmf_tgt_poll_group_002", 00:20:06.847 "admin_qpairs": 0, 00:20:06.847 "io_qpairs": 1, 00:20:06.847 "current_admin_qpairs": 0, 00:20:06.847 "current_io_qpairs": 1, 00:20:06.847 "pending_bdev_io": 0, 00:20:06.847 "completed_nvme_io": 20309, 00:20:06.847 "transports": [ 00:20:06.847 { 00:20:06.847 "trtype": "TCP" 00:20:06.847 } 00:20:06.847 ] 00:20:06.847 }, 00:20:06.847 { 00:20:06.847 "name": "nvmf_tgt_poll_group_003", 00:20:06.847 "admin_qpairs": 0, 00:20:06.847 "io_qpairs": 1, 00:20:06.847 "current_admin_qpairs": 0, 00:20:06.847 "current_io_qpairs": 1, 00:20:06.847 "pending_bdev_io": 0, 00:20:06.847 "completed_nvme_io": 20247, 00:20:06.847 "transports": [ 00:20:06.847 { 00:20:06.847 "trtype": "TCP" 00:20:06.847 } 00:20:06.847 ] 00:20:06.847 } 00:20:06.847 ] 00:20:06.847 }' 00:20:06.847 16:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:06.847 16:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:06.847 16:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:07.105 16:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:07.105 16:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1967357 00:20:15.252 Initializing NVMe Controllers 00:20:15.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:15.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:15.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:15.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:20:15.252 Initialization complete. Launching workers. 00:20:15.252 ======================================================== 00:20:15.252 Latency(us) 00:20:15.252 Device Information : IOPS MiB/s Average min max 00:20:15.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10440.29 40.78 6131.87 2125.66 10921.02 00:20:15.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10588.09 41.36 6044.22 2485.75 11697.86 00:20:15.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10512.29 41.06 6089.16 2322.51 13695.30 00:20:15.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10472.79 40.91 6112.84 1844.18 10575.19 00:20:15.252 ======================================================== 00:20:15.252 Total : 42013.47 164.12 6094.35 1844.18 13695.30 00:20:15.252 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:15.252 rmmod nvme_tcp 00:20:15.252 rmmod nvme_fabrics 00:20:15.252 rmmod nvme_keyring 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1967117 ']' 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1967117 00:20:15.252 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1967117 ']' 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1967117 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967117 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967117' 00:20:15.253 killing process with pid 1967117 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1967117 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1967117 00:20:15.253 16:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.253 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.788 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:17.788 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:17.788 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:17.788 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:18.726 16:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:20.630 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:25.908 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:25.908 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:25.908 Found net devices under 0000:86:00.0: cvl_0_0 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:25.908 Found net devices under 0000:86:00.1: cvl_0_1 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.908 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.908 16:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:25.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:20:25.909 00:20:25.909 --- 10.0.0.2 ping statistics --- 00:20:25.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.909 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:25.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:20:25.909 00:20:25.909 --- 10.0.0.1 ping statistics --- 00:20:25.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.909 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:25.909 net.core.busy_poll = 1 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:25.909 net.core.busy_read = 1 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:25.909 16:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:25.909 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:25.909 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1971044 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1971044 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1971044 ']' 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.169 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.169 [2024-11-20 16:21:57.254622] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:20:26.169 [2024-11-20 16:21:57.254673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.169 [2024-11-20 16:21:57.333597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.169 [2024-11-20 16:21:57.376053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
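The adq_configure_driver trace above boils down to the following host-side sequence; interface cvl_0_0, namespace cvl_0_0_ns_spdk and listener 10.0.0.2:4420 are the values this run uses, so read it as a condensed sketch of the commands already shown rather than a general recipe:

  # enable hardware TC offload and ADQ-friendly busy polling (as executed above)
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes: TC0 gets queues 0-1, TC1 gets queues 2-3 (2@0 / 2@2)
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 entirely in hardware
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # SPDK helper run above; pairs transmit-queue selection (XPS) with the matching receive queues
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The --sock-priority 1 handed to nvmf_create_transport a little further down is the other half of this: mqprio's "map 0 1" places priority-1 sockets, i.e. the NVMe/TCP connections, into TC1.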
00:20:26.169 [2024-11-20 16:21:57.376089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.169 [2024-11-20 16:21:57.376096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.169 [2024-11-20 16:21:57.376105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.169 [2024-11-20 16:21:57.376110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.169 [2024-11-20 16:21:57.377545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.169 [2024-11-20 16:21:57.377654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.169 [2024-11-20 16:21:57.377764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.169 [2024-11-20 16:21:57.377765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.107 16:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.107 [2024-11-20 16:21:58.261001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.107 Malloc1 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.107 [2024-11-20 16:21:58.317533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1971183 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:27.107 16:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:29.643 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:29.643 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.643 16:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.643 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.643 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:29.643 "tick_rate": 2100000000, 00:20:29.643 "poll_groups": [ 00:20:29.643 { 00:20:29.643 "name": "nvmf_tgt_poll_group_000", 00:20:29.643 "admin_qpairs": 1, 00:20:29.643 "io_qpairs": 2, 00:20:29.643 "current_admin_qpairs": 1, 00:20:29.643 "current_io_qpairs": 2, 00:20:29.643 "pending_bdev_io": 0, 00:20:29.643 "completed_nvme_io": 28551, 00:20:29.643 "transports": [ 00:20:29.643 { 00:20:29.643 "trtype": "TCP" 00:20:29.643 } 00:20:29.643 ] 00:20:29.643 }, 00:20:29.643 { 00:20:29.643 "name": "nvmf_tgt_poll_group_001", 00:20:29.643 "admin_qpairs": 0, 00:20:29.643 "io_qpairs": 2, 00:20:29.643 "current_admin_qpairs": 0, 00:20:29.643 "current_io_qpairs": 2, 00:20:29.643 "pending_bdev_io": 0, 00:20:29.643 "completed_nvme_io": 28796, 00:20:29.643 "transports": [ 00:20:29.643 { 00:20:29.643 "trtype": "TCP" 00:20:29.643 } 00:20:29.643 ] 00:20:29.643 }, 00:20:29.643 { 00:20:29.643 "name": "nvmf_tgt_poll_group_002", 00:20:29.644 "admin_qpairs": 0, 00:20:29.644 "io_qpairs": 0, 00:20:29.644 "current_admin_qpairs": 0, 00:20:29.644 "current_io_qpairs": 0, 00:20:29.644 "pending_bdev_io": 0, 00:20:29.644 "completed_nvme_io": 0, 00:20:29.644 "transports": [ 00:20:29.644 { 00:20:29.644 "trtype": "TCP" 00:20:29.644 } 00:20:29.644 ] 00:20:29.644 }, 00:20:29.644 { 00:20:29.644 "name": "nvmf_tgt_poll_group_003", 00:20:29.644 "admin_qpairs": 0, 00:20:29.644 "io_qpairs": 0, 00:20:29.644 "current_admin_qpairs": 0, 00:20:29.644 "current_io_qpairs": 0, 00:20:29.644 "pending_bdev_io": 0, 00:20:29.644 "completed_nvme_io": 0, 00:20:29.644 "transports": [ 00:20:29.644 { 00:20:29.644 "trtype": "TCP" 00:20:29.644 } 00:20:29.644 ] 00:20:29.644 } 00:20:29.644 ] 00:20:29.644 }' 00:20:29.644 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:29.644 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:29.644 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:29.644 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:29.644 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1971183 00:20:37.827 Initializing NVMe Controllers 00:20:37.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:37.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:37.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:37.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:37.827 Initialization complete. Launching workers. 
00:20:37.827 ======================================================== 00:20:37.827 Latency(us) 00:20:37.827 Device Information : IOPS MiB/s Average min max 00:20:37.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8068.70 31.52 7932.22 1491.80 53018.81 00:20:37.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7221.10 28.21 8862.54 1094.77 52242.74 00:20:37.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8239.70 32.19 7767.11 1049.37 53061.26 00:20:37.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6919.50 27.03 9272.75 1468.07 53284.78 00:20:37.827 ======================================================== 00:20:37.827 Total : 30449.00 118.94 8412.80 1049.37 53284.78 00:20:37.827 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.827 rmmod nvme_tcp 00:20:37.827 rmmod nvme_fabrics 00:20:37.827 rmmod nvme_keyring 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1971044 ']' 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1971044 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1971044 ']' 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1971044 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1971044 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1971044' 00:20:37.827 killing process with pid 1971044 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1971044 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1971044 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.827 
16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.827 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.733 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:39.733 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:39.733 00:20:39.733 real 0m49.769s 00:20:39.733 user 2m46.699s 00:20:39.733 sys 0m10.309s 00:20:39.733 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.733 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.733 ************************************ 00:20:39.733 END TEST nvmf_perf_adq 00:20:39.733 ************************************ 00:20:39.733 16:22:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:39.733 16:22:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:39.733 16:22:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.733 16:22:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:39.994 ************************************ 00:20:39.994 START TEST nvmf_shutdown 00:20:39.994 ************************************ 00:20:39.994 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:39.994 * Looking for test storage... 
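Both perf runs end with the same sanity check visible in the trace: pull nvmf_get_stats and count poll groups by current_io_qpairs. spdk_nvme_perf opens one I/O qpair per core (-c 0xF0, four cores); without ADQ they landed on all four target poll groups, while the ADQ run above collapsed them onto two and left two groups idle. A condensed sketch of the ADQ-side check, with scripts/rpc.py standing in for the test's rpc_cmd wrapper:

  # count poll groups that carry no I/O qpairs; ADQ steering should leave at least 2 idle
  stats=$(scripts/rpc.py nvmf_get_stats)
  idle=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' <<< "$stats" | wc -l)
  [[ $idle -lt 2 ]] && echo "ADQ steering ineffective: only $idle idle poll groups"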
00:20:39.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:39.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.994 --rc genhtml_branch_coverage=1 00:20:39.994 --rc genhtml_function_coverage=1 00:20:39.994 --rc genhtml_legend=1 00:20:39.994 --rc geninfo_all_blocks=1 00:20:39.994 --rc geninfo_unexecuted_blocks=1 00:20:39.994 00:20:39.994 ' 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:39.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.994 --rc genhtml_branch_coverage=1 00:20:39.994 --rc genhtml_function_coverage=1 00:20:39.994 --rc genhtml_legend=1 00:20:39.994 --rc geninfo_all_blocks=1 00:20:39.994 --rc geninfo_unexecuted_blocks=1 00:20:39.994 00:20:39.994 ' 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:39.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.994 --rc genhtml_branch_coverage=1 00:20:39.994 --rc genhtml_function_coverage=1 00:20:39.994 --rc genhtml_legend=1 00:20:39.994 --rc geninfo_all_blocks=1 00:20:39.994 --rc geninfo_unexecuted_blocks=1 00:20:39.994 00:20:39.994 ' 00:20:39.994 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:39.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.994 --rc genhtml_branch_coverage=1 00:20:39.994 --rc genhtml_function_coverage=1 00:20:39.994 --rc genhtml_legend=1 00:20:39.995 --rc geninfo_all_blocks=1 00:20:39.995 --rc geninfo_unexecuted_blocks=1 00:20:39.995 00:20:39.995 ' 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
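The scripts/common.sh chatter above is only deciding whether the installed lcov predates 2.x (the "lt 1.15 2" call) so the matching --rc coverage options get exported. The field-by-field compare it traces behaves roughly like this stand-in helper, which is an illustration and not the SPDK function itself:

  # version_lt A B: succeed when version A sorts strictly before version B (GNU sort -V)
  version_lt() {
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
  }
  version_lt 1.15 2 && echo "older lcov: keep the --rc lcov_branch_coverage / lcov_function_coverage spelling"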
00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:39.995 16:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:39.995 ************************************ 00:20:39.995 START TEST nvmf_shutdown_tc1 00:20:39.995 ************************************ 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.995 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:46.567 16:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:46.567 16:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:46.567 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:46.567 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:46.568 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:46.568 Found net devices under 0000:86:00.0: cvl_0_0 00:20:46.568 16:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:46.568 Found net devices under 0000:86:00.1: cvl_0_1 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.568 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:46.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:20:46.568 00:20:46.568 --- 10.0.0.2 ping statistics --- 00:20:46.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.568 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:46.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:20:46.568 00:20:46.568 --- 10.0.0.1 ping statistics --- 00:20:46.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.568 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1976551 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1976551 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1976551 ']' 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.568 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.568 [2024-11-20 16:22:17.315867] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:20:46.568 [2024-11-20 16:22:17.315910] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.568 [2024-11-20 16:22:17.395353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.568 [2024-11-20 16:22:17.437187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.569 [2024-11-20 16:22:17.437228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.569 [2024-11-20 16:22:17.437235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.569 [2024-11-20 16:22:17.437241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.569 [2024-11-20 16:22:17.437246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.569 [2024-11-20 16:22:17.438748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.569 [2024-11-20 16:22:17.438765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.569 [2024-11-20 16:22:17.438859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.569 [2024-11-20 16:22:17.438859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.569 [2024-11-20 16:22:17.587718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:46.569 16:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.569 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.569 Malloc1 
00:20:46.569 [2024-11-20 16:22:17.701700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.569 Malloc2 00:20:46.569 Malloc3 00:20:46.828 Malloc4 00:20:46.828 Malloc5 00:20:46.828 Malloc6 00:20:46.828 Malloc7 00:20:46.828 Malloc8 00:20:46.828 Malloc9 00:20:47.089 Malloc10 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1976693 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1976693 /var/tmp/bdevperf.sock 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1976693 ']' 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.089 { 00:20:47.089 "params": { 00:20:47.089 "name": "Nvme$subsystem", 00:20:47.089 "trtype": "$TEST_TRANSPORT", 00:20:47.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.089 "adrfam": "ipv4", 00:20:47.089 "trsvcid": "$NVMF_PORT", 00:20:47.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.089 "hdgst": ${hdgst:-false}, 00:20:47.089 "ddgst": ${ddgst:-false} 00:20:47.089 }, 00:20:47.089 "method": "bdev_nvme_attach_controller" 00:20:47.089 } 00:20:47.089 EOF 00:20:47.089 )") 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.089 { 00:20:47.089 "params": { 00:20:47.089 "name": "Nvme$subsystem", 00:20:47.089 "trtype": "$TEST_TRANSPORT", 00:20:47.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.089 "adrfam": "ipv4", 00:20:47.089 "trsvcid": "$NVMF_PORT", 00:20:47.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.089 "hdgst": ${hdgst:-false}, 00:20:47.089 "ddgst": ${ddgst:-false} 00:20:47.089 }, 00:20:47.089 "method": "bdev_nvme_attach_controller" 00:20:47.089 } 00:20:47.089 EOF 00:20:47.089 )") 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.089 { 00:20:47.089 "params": { 00:20:47.089 "name": "Nvme$subsystem", 00:20:47.089 "trtype": "$TEST_TRANSPORT", 00:20:47.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.089 "adrfam": "ipv4", 00:20:47.089 "trsvcid": "$NVMF_PORT", 00:20:47.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.089 "hdgst": ${hdgst:-false}, 00:20:47.089 "ddgst": ${ddgst:-false} 00:20:47.089 }, 00:20:47.089 "method": "bdev_nvme_attach_controller" 00:20:47.089 } 00:20:47.089 EOF 00:20:47.089 )") 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:47.089 { 00:20:47.089 "params": { 00:20:47.089 "name": "Nvme$subsystem", 00:20:47.089 "trtype": "$TEST_TRANSPORT", 00:20:47.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.089 "adrfam": "ipv4", 00:20:47.089 "trsvcid": "$NVMF_PORT", 00:20:47.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.089 "hdgst": ${hdgst:-false}, 00:20:47.089 "ddgst": ${ddgst:-false} 00:20:47.089 }, 00:20:47.089 "method": "bdev_nvme_attach_controller" 00:20:47.089 } 00:20:47.089 EOF 00:20:47.089 )") 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.089 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.089 { 00:20:47.089 "params": { 00:20:47.089 "name": "Nvme$subsystem", 00:20:47.089 "trtype": "$TEST_TRANSPORT", 00:20:47.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.089 "adrfam": "ipv4", 00:20:47.089 "trsvcid": "$NVMF_PORT", 00:20:47.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.090 "hdgst": ${hdgst:-false}, 00:20:47.090 "ddgst": ${ddgst:-false} 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 } 00:20:47.090 EOF 00:20:47.090 )") 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.090 { 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme$subsystem", 00:20:47.090 "trtype": "$TEST_TRANSPORT", 00:20:47.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "$NVMF_PORT", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.090 "hdgst": ${hdgst:-false}, 00:20:47.090 "ddgst": ${ddgst:-false} 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 } 00:20:47.090 EOF 00:20:47.090 )") 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.090 { 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme$subsystem", 00:20:47.090 "trtype": "$TEST_TRANSPORT", 00:20:47.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "$NVMF_PORT", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.090 "hdgst": ${hdgst:-false}, 00:20:47.090 "ddgst": ${ddgst:-false} 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 } 00:20:47.090 EOF 00:20:47.090 )") 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.090 [2024-11-20 16:22:18.173560] Starting SPDK 
v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:20:47.090 [2024-11-20 16:22:18.173608] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.090 { 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme$subsystem", 00:20:47.090 "trtype": "$TEST_TRANSPORT", 00:20:47.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "$NVMF_PORT", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.090 "hdgst": ${hdgst:-false}, 00:20:47.090 "ddgst": ${ddgst:-false} 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 } 00:20:47.090 EOF 00:20:47.090 )") 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.090 { 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme$subsystem", 00:20:47.090 "trtype": "$TEST_TRANSPORT", 00:20:47.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "$NVMF_PORT", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.090 "hdgst": ${hdgst:-false}, 00:20:47.090 "ddgst": ${ddgst:-false} 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 } 00:20:47.090 EOF 00:20:47.090 )") 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.090 { 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme$subsystem", 00:20:47.090 "trtype": "$TEST_TRANSPORT", 00:20:47.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "$NVMF_PORT", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.090 "hdgst": ${hdgst:-false}, 00:20:47.090 "ddgst": ${ddgst:-false} 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 } 00:20:47.090 EOF 00:20:47.090 )") 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:47.090 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme1", 00:20:47.090 "trtype": "tcp", 00:20:47.090 "traddr": "10.0.0.2", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "4420", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.090 "hdgst": false, 00:20:47.090 "ddgst": false 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 },{ 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme2", 00:20:47.090 "trtype": "tcp", 00:20:47.090 "traddr": "10.0.0.2", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "4420", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:47.090 "hdgst": false, 00:20:47.090 "ddgst": false 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 },{ 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme3", 00:20:47.090 "trtype": "tcp", 00:20:47.090 "traddr": "10.0.0.2", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "4420", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:47.090 "hdgst": false, 00:20:47.090 "ddgst": false 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 },{ 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme4", 00:20:47.090 "trtype": "tcp", 00:20:47.090 "traddr": "10.0.0.2", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "4420", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:47.090 "hdgst": false, 00:20:47.090 "ddgst": false 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 },{ 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme5", 00:20:47.090 "trtype": "tcp", 00:20:47.090 "traddr": "10.0.0.2", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "4420", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:47.090 "hdgst": false, 00:20:47.090 "ddgst": false 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 },{ 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme6", 00:20:47.090 "trtype": "tcp", 00:20:47.090 "traddr": "10.0.0.2", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "4420", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:47.090 "hdgst": false, 00:20:47.090 "ddgst": false 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 },{ 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme7", 00:20:47.090 "trtype": "tcp", 00:20:47.090 "traddr": "10.0.0.2", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "4420", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:47.090 "hdgst": false, 00:20:47.090 "ddgst": false 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 },{ 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme8", 00:20:47.090 "trtype": "tcp", 00:20:47.090 "traddr": "10.0.0.2", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "4420", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:47.090 "hdgst": false, 00:20:47.090 "ddgst": false 00:20:47.090 }, 00:20:47.090 "method": "bdev_nvme_attach_controller" 00:20:47.090 },{ 00:20:47.090 "params": { 00:20:47.090 "name": "Nvme9", 00:20:47.090 "trtype": "tcp", 00:20:47.090 "traddr": "10.0.0.2", 00:20:47.090 "adrfam": "ipv4", 00:20:47.090 "trsvcid": "4420", 00:20:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:47.090 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:47.090 "hdgst": false, 00:20:47.090 "ddgst": false 00:20:47.090 }, 00:20:47.091 "method": "bdev_nvme_attach_controller" 00:20:47.091 },{ 00:20:47.091 "params": { 00:20:47.091 "name": "Nvme10", 00:20:47.091 "trtype": "tcp", 00:20:47.091 "traddr": "10.0.0.2", 00:20:47.091 "adrfam": "ipv4", 00:20:47.091 "trsvcid": "4420", 00:20:47.091 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:47.091 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:47.091 "hdgst": false, 00:20:47.091 "ddgst": false 00:20:47.091 }, 00:20:47.091 "method": "bdev_nvme_attach_controller" 00:20:47.091 }' 00:20:47.091 [2024-11-20 16:22:18.250512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.091 [2024-11-20 16:22:18.291535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.467 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.467 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:48.467 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:48.467 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.467 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.467 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.467 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1976693 00:20:48.467 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:48.467 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:49.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1976693 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:49.399 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1976551 00:20:49.399 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:49.399 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:49.399 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:49.399 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:49.399 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:49.399 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.399 { 00:20:49.399 "params": { 00:20:49.399 "name": "Nvme$subsystem", 00:20:49.399 "trtype": "$TEST_TRANSPORT", 00:20:49.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.399 "adrfam": "ipv4", 00:20:49.399 "trsvcid": "$NVMF_PORT", 00:20:49.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.399 "hdgst": ${hdgst:-false}, 00:20:49.399 "ddgst": ${ddgst:-false} 00:20:49.399 }, 00:20:49.399 "method": "bdev_nvme_attach_controller" 00:20:49.399 } 00:20:49.399 EOF 00:20:49.399 )") 00:20:49.399 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.399 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.399 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.399 { 00:20:49.399 "params": { 00:20:49.399 "name": "Nvme$subsystem", 00:20:49.399 "trtype": "$TEST_TRANSPORT", 00:20:49.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.399 "adrfam": "ipv4", 00:20:49.399 "trsvcid": "$NVMF_PORT", 00:20:49.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.399 "hdgst": ${hdgst:-false}, 00:20:49.399 "ddgst": ${ddgst:-false} 00:20:49.399 }, 00:20:49.399 "method": "bdev_nvme_attach_controller" 00:20:49.399 } 00:20:49.399 EOF 00:20:49.399 )") 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.400 { 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme$subsystem", 00:20:49.400 "trtype": "$TEST_TRANSPORT", 00:20:49.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "$NVMF_PORT", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.400 "hdgst": ${hdgst:-false}, 00:20:49.400 "ddgst": ${ddgst:-false} 00:20:49.400 }, 00:20:49.400 "method": "bdev_nvme_attach_controller" 00:20:49.400 } 00:20:49.400 EOF 00:20:49.400 )") 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.400 { 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme$subsystem", 00:20:49.400 "trtype": "$TEST_TRANSPORT", 00:20:49.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "$NVMF_PORT", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.400 "hdgst": ${hdgst:-false}, 00:20:49.400 "ddgst": ${ddgst:-false} 00:20:49.400 }, 00:20:49.400 "method": "bdev_nvme_attach_controller" 00:20:49.400 } 00:20:49.400 EOF 00:20:49.400 )") 00:20:49.400 16:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.400 { 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme$subsystem", 00:20:49.400 "trtype": "$TEST_TRANSPORT", 00:20:49.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "$NVMF_PORT", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.400 "hdgst": ${hdgst:-false}, 00:20:49.400 "ddgst": ${ddgst:-false} 00:20:49.400 }, 00:20:49.400 "method": "bdev_nvme_attach_controller" 00:20:49.400 } 00:20:49.400 EOF 00:20:49.400 )") 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.400 { 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme$subsystem", 00:20:49.400 "trtype": "$TEST_TRANSPORT", 00:20:49.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "$NVMF_PORT", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.400 "hdgst": ${hdgst:-false}, 00:20:49.400 "ddgst": ${ddgst:-false} 00:20:49.400 }, 00:20:49.400 "method": "bdev_nvme_attach_controller" 00:20:49.400 } 00:20:49.400 EOF 00:20:49.400 )") 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.400 { 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme$subsystem", 00:20:49.400 "trtype": "$TEST_TRANSPORT", 00:20:49.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "$NVMF_PORT", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.400 "hdgst": ${hdgst:-false}, 00:20:49.400 "ddgst": ${ddgst:-false} 00:20:49.400 }, 00:20:49.400 "method": "bdev_nvme_attach_controller" 00:20:49.400 } 00:20:49.400 EOF 00:20:49.400 )") 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.400 [2024-11-20 16:22:20.585604] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:20:49.400 [2024-11-20 16:22:20.585655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977176 ] 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.400 { 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme$subsystem", 00:20:49.400 "trtype": "$TEST_TRANSPORT", 00:20:49.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "$NVMF_PORT", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.400 "hdgst": ${hdgst:-false}, 00:20:49.400 "ddgst": ${ddgst:-false} 00:20:49.400 }, 00:20:49.400 "method": "bdev_nvme_attach_controller" 00:20:49.400 } 00:20:49.400 EOF 00:20:49.400 )") 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.400 { 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme$subsystem", 00:20:49.400 "trtype": "$TEST_TRANSPORT", 00:20:49.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "$NVMF_PORT", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.400 "hdgst": ${hdgst:-false}, 00:20:49.400 "ddgst": ${ddgst:-false} 00:20:49.400 }, 00:20:49.400 "method": "bdev_nvme_attach_controller" 00:20:49.400 } 00:20:49.400 EOF 00:20:49.400 )") 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.400 { 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme$subsystem", 00:20:49.400 "trtype": "$TEST_TRANSPORT", 00:20:49.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "$NVMF_PORT", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.400 "hdgst": ${hdgst:-false}, 00:20:49.400 "ddgst": ${ddgst:-false} 00:20:49.400 }, 00:20:49.400 "method": "bdev_nvme_attach_controller" 00:20:49.400 } 00:20:49.400 EOF 00:20:49.400 )") 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:49.400 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme1", 00:20:49.400 "trtype": "tcp", 00:20:49.400 "traddr": "10.0.0.2", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "4420", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.400 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.400 "hdgst": false, 00:20:49.400 "ddgst": false 00:20:49.400 }, 00:20:49.400 "method": "bdev_nvme_attach_controller" 00:20:49.400 },{ 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme2", 00:20:49.400 "trtype": "tcp", 00:20:49.400 "traddr": "10.0.0.2", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "4420", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.400 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:49.400 "hdgst": false, 00:20:49.400 "ddgst": false 00:20:49.400 }, 00:20:49.400 "method": "bdev_nvme_attach_controller" 00:20:49.400 },{ 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme3", 00:20:49.400 "trtype": "tcp", 00:20:49.400 "traddr": "10.0.0.2", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "4420", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:49.400 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:49.400 "hdgst": false, 00:20:49.400 "ddgst": false 00:20:49.400 }, 00:20:49.400 "method": "bdev_nvme_attach_controller" 00:20:49.400 },{ 00:20:49.400 "params": { 00:20:49.400 "name": "Nvme4", 00:20:49.400 "trtype": "tcp", 00:20:49.400 "traddr": "10.0.0.2", 00:20:49.400 "adrfam": "ipv4", 00:20:49.400 "trsvcid": "4420", 00:20:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:49.401 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:49.401 "hdgst": false, 00:20:49.401 "ddgst": false 00:20:49.401 }, 00:20:49.401 "method": "bdev_nvme_attach_controller" 00:20:49.401 },{ 00:20:49.401 "params": { 00:20:49.401 "name": "Nvme5", 00:20:49.401 "trtype": "tcp", 00:20:49.401 "traddr": "10.0.0.2", 00:20:49.401 "adrfam": "ipv4", 00:20:49.401 "trsvcid": "4420", 00:20:49.401 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:49.401 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:49.401 "hdgst": false, 00:20:49.401 "ddgst": false 00:20:49.401 }, 00:20:49.401 "method": "bdev_nvme_attach_controller" 00:20:49.401 },{ 00:20:49.401 "params": { 00:20:49.401 "name": "Nvme6", 00:20:49.401 "trtype": "tcp", 00:20:49.401 "traddr": "10.0.0.2", 00:20:49.401 "adrfam": "ipv4", 00:20:49.401 "trsvcid": "4420", 00:20:49.401 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:49.401 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:49.401 "hdgst": false, 00:20:49.401 "ddgst": false 00:20:49.401 }, 00:20:49.401 "method": "bdev_nvme_attach_controller" 00:20:49.401 },{ 00:20:49.401 "params": { 00:20:49.401 "name": "Nvme7", 00:20:49.401 "trtype": "tcp", 00:20:49.401 "traddr": "10.0.0.2", 00:20:49.401 "adrfam": "ipv4", 00:20:49.401 "trsvcid": "4420", 00:20:49.401 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:49.401 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:49.401 "hdgst": false, 00:20:49.401 "ddgst": false 00:20:49.401 }, 00:20:49.401 "method": "bdev_nvme_attach_controller" 00:20:49.401 },{ 00:20:49.401 "params": { 00:20:49.401 "name": "Nvme8", 00:20:49.401 "trtype": "tcp", 00:20:49.401 "traddr": "10.0.0.2", 00:20:49.401 "adrfam": "ipv4", 00:20:49.401 "trsvcid": "4420", 00:20:49.401 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:49.401 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:49.401 "hdgst": false, 00:20:49.401 "ddgst": false 00:20:49.401 }, 00:20:49.401 "method": "bdev_nvme_attach_controller" 00:20:49.401 },{ 00:20:49.401 "params": { 00:20:49.401 "name": "Nvme9", 00:20:49.401 "trtype": "tcp", 00:20:49.401 "traddr": "10.0.0.2", 00:20:49.401 "adrfam": "ipv4", 00:20:49.401 "trsvcid": "4420", 00:20:49.401 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:49.401 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:49.401 "hdgst": false, 00:20:49.401 "ddgst": false 00:20:49.401 }, 00:20:49.401 "method": "bdev_nvme_attach_controller" 00:20:49.401 },{ 00:20:49.401 "params": { 00:20:49.401 "name": "Nvme10", 00:20:49.401 "trtype": "tcp", 00:20:49.401 "traddr": "10.0.0.2", 00:20:49.401 "adrfam": "ipv4", 00:20:49.401 "trsvcid": "4420", 00:20:49.401 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:49.401 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:49.401 "hdgst": false, 00:20:49.401 "ddgst": false 00:20:49.401 }, 00:20:49.401 "method": "bdev_nvme_attach_controller" 00:20:49.401 }' 00:20:49.659 [2024-11-20 16:22:20.661909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.659 [2024-11-20 16:22:20.703116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.032 Running I/O for 1 seconds... 00:20:52.228 2253.00 IOPS, 140.81 MiB/s 00:20:52.228 Latency(us) 00:20:52.228 [2024-11-20T15:22:23.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.228 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.228 Verification LBA range: start 0x0 length 0x400 00:20:52.228 Nvme1n1 : 1.14 281.01 17.56 0.00 0.00 225732.32 16352.79 214708.42 00:20:52.228 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.228 Verification LBA range: start 0x0 length 0x400 00:20:52.228 Nvme2n1 : 1.08 237.97 14.87 0.00 0.00 262512.40 16727.28 219701.64 00:20:52.228 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.228 Verification LBA range: start 0x0 length 0x400 00:20:52.228 Nvme3n1 : 1.13 282.15 17.63 0.00 0.00 218499.75 17476.27 211712.49 00:20:52.228 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.228 Verification LBA range: start 0x0 length 0x400 00:20:52.228 Nvme4n1 : 1.08 300.65 18.79 0.00 0.00 200283.59 10735.42 214708.42 00:20:52.228 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.228 Verification LBA range: start 0x0 length 0x400 00:20:52.228 Nvme5n1 : 1.15 279.18 17.45 0.00 0.00 214725.19 16477.62 221698.93 00:20:52.228 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.228 Verification LBA range: start 0x0 length 0x400 00:20:52.228 Nvme6n1 : 1.14 285.31 17.83 0.00 0.00 205736.04 7989.15 208716.56 00:20:52.228 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.228 Verification LBA range: start 0x0 length 0x400 00:20:52.228 Nvme7n1 : 1.12 288.34 18.02 0.00 0.00 197810.84 21346.01 195734.19 00:20:52.228 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.228 Verification LBA range: start 0x0 length 0x400 00:20:52.228 Nvme8n1 : 1.14 279.81 17.49 0.00 0.00 204980.52 12670.29 230686.72 00:20:52.228 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.228 Verification LBA range: start 0x0 length 0x400 00:20:52.228 Nvme9n1 : 1.15 278.07 17.38 0.00 0.00 203344.16 17975.59 224694.86 00:20:52.228 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:20:52.228 Verification LBA range: start 0x0 length 0x400 00:20:52.228 Nvme10n1 : 1.16 276.72 17.30 0.00 0.00 200835.46 12170.97 232684.01 00:20:52.228 [2024-11-20T15:22:23.462Z] =================================================================================================================== 00:20:52.228 [2024-11-20T15:22:23.462Z] Total : 2789.21 174.33 0.00 0.00 212398.25 7989.15 232684.01 00:20:52.487 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:52.487 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:52.487 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:52.488 rmmod nvme_tcp 00:20:52.488 rmmod nvme_fabrics 00:20:52.488 rmmod nvme_keyring 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1976551 ']' 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1976551 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1976551 ']' 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1976551 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1976551 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:52.488 16:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1976551' 00:20:52.488 killing process with pid 1976551 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1976551 00:20:52.488 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1976551 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.747 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:55.286 00:20:55.286 real 0m14.829s 00:20:55.286 user 0m31.494s 00:20:55.286 sys 0m5.838s 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:55.286 ************************************ 00:20:55.286 END TEST nvmf_shutdown_tc1 00:20:55.286 ************************************ 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:55.286 ************************************ 00:20:55.286 START TEST nvmf_shutdown_tc2 00:20:55.286 ************************************ 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.286 16:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.286 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:55.287 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.287 16:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:55.287 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:55.287 Found net devices under 0000:86:00.0: cvl_0_0 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.287 16:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:55.287 Found net devices under 0000:86:00.1: cvl_0_1 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:55.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:20:55.287 00:20:55.287 --- 10.0.0.2 ping statistics --- 00:20:55.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.287 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:20:55.287 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:20:55.288 00:20:55.288 --- 10.0.0.1 ping statistics --- 00:20:55.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.288 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1978204 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1978204 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1978204 ']' 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.288 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.288 [2024-11-20 16:22:26.478554] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:20:55.288 [2024-11-20 16:22:26.478595] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.547 [2024-11-20 16:22:26.542262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.547 [2024-11-20 16:22:26.584682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.547 [2024-11-20 16:22:26.584720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.547 [2024-11-20 16:22:26.584728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.547 [2024-11-20 16:22:26.584734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.547 [2024-11-20 16:22:26.584739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
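The entries above cover nvmftestinit and nvmfappstart for the tc2 run: one e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side of the 10.0.0.1/10.0.0.2 pair, TCP port 4420 is opened on the initiator interface, and nvmf_tgt is started inside that namespace with core mask 0x1E. A minimal standalone sketch of that bring-up, assuming the interface names, addresses and binary path exactly as they appear in this log and substituting a simple socket poll for the suite's waitforlisten helper:

    # Target-side network namespace and address plan taken from this log
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # same reachability check the log performs

    # Launch the target inside the namespace (0x1E = cores 1-4) and wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done

The socket poll above is a simplification: the real waitforlisten helper in the test suite also verifies that the process is still alive while it waits.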
00:20:55.547 [2024-11-20 16:22:26.586316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.547 [2024-11-20 16:22:26.586428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.547 [2024-11-20 16:22:26.586531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.547 [2024-11-20 16:22:26.586532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.547 [2024-11-20 16:22:26.732136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.547 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.548 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:55.806 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.806 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:55.806 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.806 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:55.806 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:55.806 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.806 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.806 Malloc1 00:20:55.806 [2024-11-20 16:22:26.845498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.806 Malloc2 00:20:55.806 Malloc3 00:20:55.806 Malloc4 00:20:55.806 Malloc5 00:20:55.806 Malloc6 00:20:56.064 Malloc7 00:20:56.064 Malloc8 00:20:56.064 Malloc9 00:20:56.064 Malloc10 00:20:56.064 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.064 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:56.064 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.064 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.064 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1978382 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1978382 /var/tmp/bdevperf.sock 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1978382 ']' 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.065 16:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.065 { 00:20:56.065 "params": { 00:20:56.065 "name": "Nvme$subsystem", 00:20:56.065 "trtype": "$TEST_TRANSPORT", 00:20:56.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.065 "adrfam": "ipv4", 00:20:56.065 "trsvcid": "$NVMF_PORT", 00:20:56.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.065 "hdgst": ${hdgst:-false}, 00:20:56.065 "ddgst": ${ddgst:-false} 00:20:56.065 }, 00:20:56.065 "method": "bdev_nvme_attach_controller" 00:20:56.065 } 00:20:56.065 EOF 00:20:56.065 )") 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.065 { 00:20:56.065 "params": { 00:20:56.065 "name": "Nvme$subsystem", 00:20:56.065 "trtype": "$TEST_TRANSPORT", 00:20:56.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.065 "adrfam": "ipv4", 00:20:56.065 "trsvcid": "$NVMF_PORT", 00:20:56.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.065 "hdgst": ${hdgst:-false}, 00:20:56.065 "ddgst": ${ddgst:-false} 00:20:56.065 }, 00:20:56.065 "method": "bdev_nvme_attach_controller" 00:20:56.065 } 00:20:56.065 EOF 00:20:56.065 )") 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.065 { 00:20:56.065 "params": { 00:20:56.065 
"name": "Nvme$subsystem", 00:20:56.065 "trtype": "$TEST_TRANSPORT", 00:20:56.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.065 "adrfam": "ipv4", 00:20:56.065 "trsvcid": "$NVMF_PORT", 00:20:56.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.065 "hdgst": ${hdgst:-false}, 00:20:56.065 "ddgst": ${ddgst:-false} 00:20:56.065 }, 00:20:56.065 "method": "bdev_nvme_attach_controller" 00:20:56.065 } 00:20:56.065 EOF 00:20:56.065 )") 00:20:56.065 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.325 { 00:20:56.325 "params": { 00:20:56.325 "name": "Nvme$subsystem", 00:20:56.325 "trtype": "$TEST_TRANSPORT", 00:20:56.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.325 "adrfam": "ipv4", 00:20:56.325 "trsvcid": "$NVMF_PORT", 00:20:56.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.325 "hdgst": ${hdgst:-false}, 00:20:56.325 "ddgst": ${ddgst:-false} 00:20:56.325 }, 00:20:56.325 "method": "bdev_nvme_attach_controller" 00:20:56.325 } 00:20:56.325 EOF 00:20:56.325 )") 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.325 { 00:20:56.325 "params": { 00:20:56.325 "name": "Nvme$subsystem", 00:20:56.325 "trtype": "$TEST_TRANSPORT", 00:20:56.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.325 "adrfam": "ipv4", 00:20:56.325 "trsvcid": "$NVMF_PORT", 00:20:56.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.325 "hdgst": ${hdgst:-false}, 00:20:56.325 "ddgst": ${ddgst:-false} 00:20:56.325 }, 00:20:56.325 "method": "bdev_nvme_attach_controller" 00:20:56.325 } 00:20:56.325 EOF 00:20:56.325 )") 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.325 { 00:20:56.325 "params": { 00:20:56.325 "name": "Nvme$subsystem", 00:20:56.325 "trtype": "$TEST_TRANSPORT", 00:20:56.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.325 "adrfam": "ipv4", 00:20:56.325 "trsvcid": "$NVMF_PORT", 00:20:56.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.325 "hdgst": ${hdgst:-false}, 00:20:56.325 "ddgst": ${ddgst:-false} 00:20:56.325 }, 00:20:56.325 "method": "bdev_nvme_attach_controller" 00:20:56.325 } 00:20:56.325 EOF 00:20:56.325 )") 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.325 { 00:20:56.325 "params": { 00:20:56.325 "name": "Nvme$subsystem", 00:20:56.325 "trtype": "$TEST_TRANSPORT", 00:20:56.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.325 "adrfam": "ipv4", 00:20:56.325 "trsvcid": "$NVMF_PORT", 00:20:56.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.325 "hdgst": ${hdgst:-false}, 00:20:56.325 "ddgst": ${ddgst:-false} 00:20:56.325 }, 00:20:56.325 "method": "bdev_nvme_attach_controller" 00:20:56.325 } 00:20:56.325 EOF 00:20:56.325 )") 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:56.325 [2024-11-20 16:22:27.322491] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:20:56.325 [2024-11-20 16:22:27.322541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978382 ] 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.325 { 00:20:56.325 "params": { 00:20:56.325 "name": "Nvme$subsystem", 00:20:56.325 "trtype": "$TEST_TRANSPORT", 00:20:56.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.325 "adrfam": "ipv4", 00:20:56.325 "trsvcid": "$NVMF_PORT", 00:20:56.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.325 "hdgst": ${hdgst:-false}, 00:20:56.325 "ddgst": ${ddgst:-false} 00:20:56.325 }, 00:20:56.325 "method": "bdev_nvme_attach_controller" 00:20:56.325 } 00:20:56.325 EOF 00:20:56.325 )") 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.325 { 00:20:56.325 "params": { 00:20:56.325 "name": "Nvme$subsystem", 00:20:56.325 "trtype": "$TEST_TRANSPORT", 00:20:56.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.325 "adrfam": "ipv4", 00:20:56.325 "trsvcid": "$NVMF_PORT", 00:20:56.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.325 "hdgst": ${hdgst:-false}, 00:20:56.325 "ddgst": ${ddgst:-false} 00:20:56.325 }, 00:20:56.325 "method": "bdev_nvme_attach_controller" 00:20:56.325 } 00:20:56.325 EOF 00:20:56.325 )") 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.325 { 00:20:56.325 "params": { 00:20:56.325 "name": "Nvme$subsystem", 00:20:56.325 "trtype": "$TEST_TRANSPORT", 00:20:56.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.325 
"adrfam": "ipv4", 00:20:56.325 "trsvcid": "$NVMF_PORT", 00:20:56.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.325 "hdgst": ${hdgst:-false}, 00:20:56.325 "ddgst": ${ddgst:-false} 00:20:56.325 }, 00:20:56.325 "method": "bdev_nvme_attach_controller" 00:20:56.325 } 00:20:56.325 EOF 00:20:56.325 )") 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:20:56.325 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:56.326 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:56.326 "params": { 00:20:56.326 "name": "Nvme1", 00:20:56.326 "trtype": "tcp", 00:20:56.326 "traddr": "10.0.0.2", 00:20:56.326 "adrfam": "ipv4", 00:20:56.326 "trsvcid": "4420", 00:20:56.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.326 "hdgst": false, 00:20:56.326 "ddgst": false 00:20:56.326 }, 00:20:56.326 "method": "bdev_nvme_attach_controller" 00:20:56.326 },{ 00:20:56.326 "params": { 00:20:56.326 "name": "Nvme2", 00:20:56.326 "trtype": "tcp", 00:20:56.326 "traddr": "10.0.0.2", 00:20:56.326 "adrfam": "ipv4", 00:20:56.326 "trsvcid": "4420", 00:20:56.326 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:56.326 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:56.326 "hdgst": false, 00:20:56.326 "ddgst": false 00:20:56.326 }, 00:20:56.326 "method": "bdev_nvme_attach_controller" 00:20:56.326 },{ 00:20:56.326 "params": { 00:20:56.326 "name": "Nvme3", 00:20:56.326 "trtype": "tcp", 00:20:56.326 "traddr": "10.0.0.2", 00:20:56.326 "adrfam": "ipv4", 00:20:56.326 "trsvcid": "4420", 00:20:56.326 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:56.326 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:56.326 "hdgst": false, 00:20:56.326 "ddgst": false 00:20:56.326 }, 00:20:56.326 "method": "bdev_nvme_attach_controller" 00:20:56.326 },{ 00:20:56.326 "params": { 00:20:56.326 "name": "Nvme4", 00:20:56.326 "trtype": "tcp", 00:20:56.326 "traddr": "10.0.0.2", 00:20:56.326 "adrfam": "ipv4", 00:20:56.326 "trsvcid": "4420", 00:20:56.326 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:56.326 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:56.326 "hdgst": false, 00:20:56.326 "ddgst": false 00:20:56.326 }, 00:20:56.326 "method": "bdev_nvme_attach_controller" 00:20:56.326 },{ 00:20:56.326 "params": { 00:20:56.326 "name": "Nvme5", 00:20:56.326 "trtype": "tcp", 00:20:56.326 "traddr": "10.0.0.2", 00:20:56.326 "adrfam": "ipv4", 00:20:56.326 "trsvcid": "4420", 00:20:56.326 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:56.326 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:56.326 "hdgst": false, 00:20:56.326 "ddgst": false 00:20:56.326 }, 00:20:56.326 "method": "bdev_nvme_attach_controller" 00:20:56.326 },{ 00:20:56.326 "params": { 00:20:56.326 "name": "Nvme6", 00:20:56.326 "trtype": "tcp", 00:20:56.326 "traddr": "10.0.0.2", 00:20:56.326 "adrfam": "ipv4", 00:20:56.326 "trsvcid": "4420", 00:20:56.326 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:56.326 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:56.326 "hdgst": false, 00:20:56.326 "ddgst": false 00:20:56.326 }, 00:20:56.326 "method": "bdev_nvme_attach_controller" 00:20:56.326 },{ 00:20:56.326 "params": { 00:20:56.326 "name": "Nvme7", 00:20:56.326 "trtype": "tcp", 00:20:56.326 "traddr": "10.0.0.2", 
00:20:56.326 "adrfam": "ipv4", 00:20:56.326 "trsvcid": "4420", 00:20:56.326 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:56.326 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:56.326 "hdgst": false, 00:20:56.326 "ddgst": false 00:20:56.326 }, 00:20:56.326 "method": "bdev_nvme_attach_controller" 00:20:56.326 },{ 00:20:56.326 "params": { 00:20:56.326 "name": "Nvme8", 00:20:56.326 "trtype": "tcp", 00:20:56.326 "traddr": "10.0.0.2", 00:20:56.326 "adrfam": "ipv4", 00:20:56.326 "trsvcid": "4420", 00:20:56.326 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:56.326 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:56.326 "hdgst": false, 00:20:56.326 "ddgst": false 00:20:56.326 }, 00:20:56.326 "method": "bdev_nvme_attach_controller" 00:20:56.326 },{ 00:20:56.326 "params": { 00:20:56.326 "name": "Nvme9", 00:20:56.326 "trtype": "tcp", 00:20:56.326 "traddr": "10.0.0.2", 00:20:56.326 "adrfam": "ipv4", 00:20:56.326 "trsvcid": "4420", 00:20:56.326 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:56.326 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:56.326 "hdgst": false, 00:20:56.326 "ddgst": false 00:20:56.326 }, 00:20:56.326 "method": "bdev_nvme_attach_controller" 00:20:56.326 },{ 00:20:56.326 "params": { 00:20:56.326 "name": "Nvme10", 00:20:56.326 "trtype": "tcp", 00:20:56.326 "traddr": "10.0.0.2", 00:20:56.326 "adrfam": "ipv4", 00:20:56.326 "trsvcid": "4420", 00:20:56.326 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:56.326 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:56.326 "hdgst": false, 00:20:56.326 "ddgst": false 00:20:56.326 }, 00:20:56.326 "method": "bdev_nvme_attach_controller" 00:20:56.326 }' 00:20:56.326 [2024-11-20 16:22:27.401279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.326 [2024-11-20 16:22:27.442553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.703 Running I/O for 10 seconds... 
00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:58.270 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:58.271 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:58.271 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:58.271 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.271 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.271 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.271 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:58.271 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:58.271 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:58.529 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:58.529 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:58.529 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:58.529 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:58.529 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.530 16:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1978382 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1978382 ']' 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1978382 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1978382 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1978382' 00:20:58.530 killing process with pid 1978382 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1978382 00:20:58.530 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1978382 00:20:58.530 Received shutdown signal, test time was about 0.803616 seconds 00:20:58.530 00:20:58.530 Latency(us) 00:20:58.530 [2024-11-20T15:22:29.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.530 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:58.530 Verification LBA range: start 0x0 length 0x400 00:20:58.530 Nvme1n1 : 0.80 319.97 20.00 0.00 0.00 197534.48 17476.27 196732.83 00:20:58.530 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:58.530 Verification LBA range: start 0x0 length 0x400 00:20:58.530 Nvme2n1 : 0.77 253.36 15.83 0.00 0.00 243485.67 3058.35 211712.49 00:20:58.530 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:58.530 Verification LBA range: start 0x0 length 0x400 00:20:58.530 Nvme3n1 : 0.79 322.47 20.15 0.00 0.00 188011.15 15166.90 214708.42 00:20:58.530 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:58.530 Verification LBA range: start 0x0 length 0x400 00:20:58.530 Nvme4n1 : 0.80 321.18 20.07 0.00 0.00 185159.56 14355.50 
215707.06 00:20:58.530 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:58.530 Verification LBA range: start 0x0 length 0x400 00:20:58.530 Nvme5n1 : 0.78 246.19 15.39 0.00 0.00 236166.58 18474.91 212711.13 00:20:58.530 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:58.530 Verification LBA range: start 0x0 length 0x400 00:20:58.530 Nvme6n1 : 0.77 265.91 16.62 0.00 0.00 210383.89 8800.55 214708.42 00:20:58.530 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:58.530 Verification LBA range: start 0x0 length 0x400 00:20:58.530 Nvme7n1 : 0.80 318.85 19.93 0.00 0.00 175134.11 13107.20 228689.43 00:20:58.530 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:58.530 Verification LBA range: start 0x0 length 0x400 00:20:58.530 Nvme8n1 : 0.78 250.58 15.66 0.00 0.00 215516.16 5211.67 214708.42 00:20:58.530 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:58.530 Verification LBA range: start 0x0 length 0x400 00:20:58.530 Nvme9n1 : 0.79 242.52 15.16 0.00 0.00 219723.74 18225.25 237677.23 00:20:58.530 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:58.530 Verification LBA range: start 0x0 length 0x400 00:20:58.530 Nvme10n1 : 0.79 243.87 15.24 0.00 0.00 213244.59 16976.94 215707.06 00:20:58.530 [2024-11-20T15:22:29.764Z] =================================================================================================================== 00:20:58.530 [2024-11-20T15:22:29.764Z] Total : 2784.90 174.06 0.00 0.00 205965.52 3058.35 237677.23 00:20:58.789 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1978204 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.723 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.723 rmmod nvme_tcp 00:20:59.723 rmmod nvme_fabrics 00:20:59.982 rmmod nvme_keyring 00:20:59.982 16:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.982 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:59.982 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:59.982 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1978204 ']' 00:20:59.982 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1978204 00:20:59.982 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1978204 ']' 00:20:59.982 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1978204 00:20:59.982 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:59.982 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.982 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1978204 00:20:59.982 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:59.982 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:59.982 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1978204' 00:20:59.982 killing process with pid 1978204 00:20:59.982 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1978204 00:20:59.982 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1978204 00:21:00.241 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:00.241 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:00.241 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:00.241 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:00.241 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:00.241 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:00.241 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:00.241 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:00.241 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:00.241 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.241 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.241 16:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:02.782 00:21:02.782 real 0m7.353s 00:21:02.782 user 0m21.636s 00:21:02.782 sys 0m1.320s 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.782 ************************************ 00:21:02.782 END TEST nvmf_shutdown_tc2 00:21:02.782 ************************************ 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:02.782 ************************************ 00:21:02.782 START TEST nvmf_shutdown_tc3 00:21:02.782 ************************************ 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.782 16:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.782 16:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:02.782 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.782 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:02.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.783 16:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:02.783 Found net devices under 0000:86:00.0: cvl_0_0 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:02.783 Found net devices under 0000:86:00.1: cvl_0_1 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:02.783 16:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:02.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:21:02.783 00:21:02.783 --- 10.0.0.2 ping statistics --- 00:21:02.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.783 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:21:02.783 00:21:02.783 --- 10.0.0.1 ping statistics --- 00:21:02.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.783 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1979517 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1979517 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1979517 ']' 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
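For reference, the nvmftestinit sequence traced above builds the TCP test topology used by the rest of tc3: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the default namespace, an ACCEPT rule for TCP port 4420 is inserted (tagged SPDK_NVMF so nvmftestfini can strip it again, as seen in the tc2 teardown earlier), and reachability is verified with one ping in each direction. A stand-alone sketch of the same setup, using the interface names and addresses from this run (comment text in the iptables rule is illustrative):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the SPDK_NVMF tag is what the teardown greps out of iptables-save later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF: test listener port'
  ping -c 1 10.0.0.2                                  # initiator -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator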
00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.783 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.783 [2024-11-20 16:22:33.926165] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:21:02.783 [2024-11-20 16:22:33.926218] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.043 [2024-11-20 16:22:34.020959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.043 [2024-11-20 16:22:34.063229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.043 [2024-11-20 16:22:34.063266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.043 [2024-11-20 16:22:34.063273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.043 [2024-11-20 16:22:34.063279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.043 [2024-11-20 16:22:34.063283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.043 [2024-11-20 16:22:34.064883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.043 [2024-11-20 16:22:34.064993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.043 [2024-11-20 16:22:34.065076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.043 [2024-11-20 16:22:34.065077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:03.610 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.611 [2024-11-20 16:22:34.801166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:03.611 16:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.611 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:03.869 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.869 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:03.870 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.870 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:03.870 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.870 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:03.870 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.870 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:03.870 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:03.870 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.870 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.870 Malloc1 
00:21:03.870 [2024-11-20 16:22:34.909990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.870 Malloc2 00:21:03.870 Malloc3 00:21:03.870 Malloc4 00:21:03.870 Malloc5 00:21:03.870 Malloc6 00:21:04.129 Malloc7 00:21:04.129 Malloc8 00:21:04.129 Malloc9 00:21:04.129 Malloc10 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1979800 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1979800 /var/tmp/bdevperf.sock 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1979800 ']' 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.129 { 00:21:04.129 "params": { 00:21:04.129 "name": "Nvme$subsystem", 00:21:04.129 "trtype": "$TEST_TRANSPORT", 00:21:04.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.129 "adrfam": "ipv4", 00:21:04.129 "trsvcid": "$NVMF_PORT", 00:21:04.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.129 "hdgst": ${hdgst:-false}, 00:21:04.129 "ddgst": ${ddgst:-false} 00:21:04.129 }, 00:21:04.129 "method": "bdev_nvme_attach_controller" 00:21:04.129 } 00:21:04.129 EOF 00:21:04.129 )") 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.129 { 00:21:04.129 "params": { 00:21:04.129 "name": "Nvme$subsystem", 00:21:04.129 "trtype": "$TEST_TRANSPORT", 00:21:04.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.129 "adrfam": "ipv4", 00:21:04.129 "trsvcid": "$NVMF_PORT", 00:21:04.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.129 "hdgst": ${hdgst:-false}, 00:21:04.129 "ddgst": ${ddgst:-false} 00:21:04.129 }, 00:21:04.129 "method": "bdev_nvme_attach_controller" 00:21:04.129 } 00:21:04.129 EOF 00:21:04.129 )") 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.129 { 00:21:04.129 "params": { 00:21:04.129 "name": "Nvme$subsystem", 00:21:04.129 "trtype": "$TEST_TRANSPORT", 00:21:04.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.129 "adrfam": "ipv4", 00:21:04.129 "trsvcid": "$NVMF_PORT", 00:21:04.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.129 "hdgst": ${hdgst:-false}, 00:21:04.129 "ddgst": ${ddgst:-false} 00:21:04.129 }, 00:21:04.129 "method": "bdev_nvme_attach_controller" 00:21:04.129 } 00:21:04.129 EOF 00:21:04.129 )") 00:21:04.129 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:04.388 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.388 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:21:04.388 { 00:21:04.388 "params": { 00:21:04.388 "name": "Nvme$subsystem", 00:21:04.388 "trtype": "$TEST_TRANSPORT", 00:21:04.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.388 "adrfam": "ipv4", 00:21:04.388 "trsvcid": "$NVMF_PORT", 00:21:04.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.388 "hdgst": ${hdgst:-false}, 00:21:04.388 "ddgst": ${ddgst:-false} 00:21:04.388 }, 00:21:04.388 "method": "bdev_nvme_attach_controller" 00:21:04.388 } 00:21:04.388 EOF 00:21:04.388 )") 00:21:04.388 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:04.388 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.388 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.388 { 00:21:04.388 "params": { 00:21:04.388 "name": "Nvme$subsystem", 00:21:04.388 "trtype": "$TEST_TRANSPORT", 00:21:04.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.388 "adrfam": "ipv4", 00:21:04.388 "trsvcid": "$NVMF_PORT", 00:21:04.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.388 "hdgst": ${hdgst:-false}, 00:21:04.388 "ddgst": ${ddgst:-false} 00:21:04.388 }, 00:21:04.388 "method": "bdev_nvme_attach_controller" 00:21:04.388 } 00:21:04.388 EOF 00:21:04.388 )") 00:21:04.388 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:04.388 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.388 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.388 { 00:21:04.388 "params": { 00:21:04.388 "name": "Nvme$subsystem", 00:21:04.388 "trtype": "$TEST_TRANSPORT", 00:21:04.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.388 "adrfam": "ipv4", 00:21:04.388 "trsvcid": "$NVMF_PORT", 00:21:04.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.388 "hdgst": ${hdgst:-false}, 00:21:04.388 "ddgst": ${ddgst:-false} 00:21:04.388 }, 00:21:04.388 "method": "bdev_nvme_attach_controller" 00:21:04.388 } 00:21:04.388 EOF 00:21:04.388 )") 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.389 { 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme$subsystem", 00:21:04.389 "trtype": "$TEST_TRANSPORT", 00:21:04.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "$NVMF_PORT", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.389 "hdgst": ${hdgst:-false}, 00:21:04.389 "ddgst": ${ddgst:-false} 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 } 00:21:04.389 EOF 00:21:04.389 )") 00:21:04.389 [2024-11-20 16:22:35.383607] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:21:04.389 [2024-11-20 16:22:35.383655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979800 ] 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.389 { 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme$subsystem", 00:21:04.389 "trtype": "$TEST_TRANSPORT", 00:21:04.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "$NVMF_PORT", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.389 "hdgst": ${hdgst:-false}, 00:21:04.389 "ddgst": ${ddgst:-false} 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 } 00:21:04.389 EOF 00:21:04.389 )") 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.389 { 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme$subsystem", 00:21:04.389 "trtype": "$TEST_TRANSPORT", 00:21:04.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "$NVMF_PORT", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.389 "hdgst": ${hdgst:-false}, 00:21:04.389 "ddgst": ${ddgst:-false} 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 } 00:21:04.389 EOF 00:21:04.389 )") 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.389 { 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme$subsystem", 00:21:04.389 "trtype": "$TEST_TRANSPORT", 00:21:04.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "$NVMF_PORT", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.389 "hdgst": ${hdgst:-false}, 00:21:04.389 "ddgst": ${ddgst:-false} 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 } 00:21:04.389 EOF 00:21:04.389 )") 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
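The fragments generated above are then joined and passed through jq to form the final configuration printed next in the trace. Once bdevperf reports "Running I/O for 10 seconds", the script reuses the same waitforio gate already seen in the tc2 run earlier: poll bdevperf over its RPC socket until Nvme1n1 has completed at least 100 reads, retrying up to ten times with a 0.25 s pause between attempts (rpc_cmd being the autotest wrapper around SPDK's rpc.py). A sketch of that loop as reconstructed from the xtrace:

  # reconstructed from target/shutdown.sh @51-@70 as echoed in this trace; sketch only
  waitforio() {
      local sock=$1 bdev=$2
      local ret=1 i read_io_count
      for ((i = 10; i != 0; i--)); do
          read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
          if [ "$read_io_count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }

  waitforio /var/tmp/bdevperf.sock Nvme1n1   # as invoked at shutdown.sh@133 below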
00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:04.389 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme1", 00:21:04.389 "trtype": "tcp", 00:21:04.389 "traddr": "10.0.0.2", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "4420", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.389 "hdgst": false, 00:21:04.389 "ddgst": false 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 },{ 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme2", 00:21:04.389 "trtype": "tcp", 00:21:04.389 "traddr": "10.0.0.2", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "4420", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:04.389 "hdgst": false, 00:21:04.389 "ddgst": false 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 },{ 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme3", 00:21:04.389 "trtype": "tcp", 00:21:04.389 "traddr": "10.0.0.2", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "4420", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:04.389 "hdgst": false, 00:21:04.389 "ddgst": false 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 },{ 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme4", 00:21:04.389 "trtype": "tcp", 00:21:04.389 "traddr": "10.0.0.2", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "4420", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:04.389 "hdgst": false, 00:21:04.389 "ddgst": false 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 },{ 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme5", 00:21:04.389 "trtype": "tcp", 00:21:04.389 "traddr": "10.0.0.2", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "4420", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:04.389 "hdgst": false, 00:21:04.389 "ddgst": false 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 },{ 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme6", 00:21:04.389 "trtype": "tcp", 00:21:04.389 "traddr": "10.0.0.2", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "4420", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:04.389 "hdgst": false, 00:21:04.389 "ddgst": false 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 },{ 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme7", 00:21:04.389 "trtype": "tcp", 00:21:04.389 "traddr": "10.0.0.2", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "4420", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:04.389 "hdgst": false, 00:21:04.389 "ddgst": false 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 },{ 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme8", 00:21:04.389 "trtype": "tcp", 00:21:04.389 "traddr": "10.0.0.2", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "4420", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:04.389 "hdgst": false, 00:21:04.389 "ddgst": false 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 },{ 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme9", 00:21:04.389 "trtype": "tcp", 00:21:04.389 "traddr": "10.0.0.2", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "4420", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:04.389 "hdgst": false, 00:21:04.389 "ddgst": false 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 },{ 00:21:04.389 "params": { 00:21:04.389 "name": "Nvme10", 00:21:04.389 "trtype": "tcp", 00:21:04.389 "traddr": "10.0.0.2", 00:21:04.389 "adrfam": "ipv4", 00:21:04.389 "trsvcid": "4420", 00:21:04.389 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:04.389 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:04.389 "hdgst": false, 00:21:04.389 "ddgst": false 00:21:04.389 }, 00:21:04.389 "method": "bdev_nvme_attach_controller" 00:21:04.389 }' 00:21:04.389 [2024-11-20 16:22:35.460130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.389 [2024-11-20 16:22:35.501053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.292 Running I/O for 10 seconds... 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:06.292 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:06.293 16:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.293 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.293 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.293 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:06.293 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:06.293 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:06.551 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:06.551 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:06.551 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:06.551 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:06.551 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.551 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.551 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.551 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:06.551 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:06.551 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:06.815 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:06.815 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:06.815 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:06.815 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:06.815 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.815 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:07.095 16:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1979517 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1979517 ']' 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1979517 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979517 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1979517' 00:21:07.095 killing process with pid 1979517 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1979517 00:21:07.095 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1979517 00:21:07.095 [2024-11-20 16:22:38.130250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set 00:21:07.095 [2024-11-20 16:22:38.130329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set 00:21:07.095 [2024-11-20 16:22:38.130337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set 00:21:07.095 [2024-11-20 16:22:38.130343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set 00:21:07.095 [2024-11-20 16:22:38.130350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set 00:21:07.095 [2024-11-20 16:22:38.130356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set 00:21:07.095 [2024-11-20 16:22:38.130362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set 00:21:07.095 [2024-11-20 16:22:38.130368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set 00:21:07.095 [2024-11-20 16:22:38.130374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set 00:21:07.095 [2024-11-20 16:22:38.130380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set 00:21:07.095 [2024-11-20 16:22:38.130385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set 00:21:07.095 [2024-11-20 16:22:38.130391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xdbfe30 is same with the state(6) to be set
00:21:07.095 [2024-11-20 16:22:38.130397 - 16:22:38.138465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: identical "The recv state of tqpair=... is same with the state(6) to be set" messages repeated for tqpair=0xdbfe30, 0xb481f0, 0xb486e0, 0xb48bb0, 0xb49080 and 0xb49550 (duplicate lines elided)
00:21:07.100 [2024-11-20 16:22:38.138471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.100 [2024-11-20 16:22:38.138557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.101 [2024-11-20 16:22:38.138563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.101 [2024-11-20 16:22:38.138570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.101 [2024-11-20 16:22:38.138576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.101 [2024-11-20 16:22:38.138582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49550 is same with the state(6) to be set 00:21:07.101 [2024-11-20 16:22:38.138677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 
16:22:38.138724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ed1e0 is same with the state(6) to be set 00:21:07.101 [2024-11-20 16:22:38.138787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f82c0 is same with the state(6) to be set 00:21:07.101 [2024-11-20 16:22:38.138871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb250a0 is same with the state(6) to be set 00:21:07.101 [2024-11-20 16:22:38.138955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.138991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.138999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60d610 is same with the state(6) to be set 00:21:07.101 [2024-11-20 16:22:38.139036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139098] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb56370 is same with the state(6) to be set 00:21:07.101 [2024-11-20 16:22:38.139137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4b550 is same with the state(6) to be set 00:21:07.101 [2024-11-20 16:22:38.139226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.101 [2024-11-20 16:22:38.139277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.101 [2024-11-20 16:22:38.139283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f91b0 is same with the state(6) to be set 00:21:07.102 [2024-11-20 16:22:38.139305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.102 [2024-11-20 16:22:38.139313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.102 [2024-11-20 16:22:38.139322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.102 [2024-11-20 16:22:38.139328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.102 [2024-11-20 16:22:38.139342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.102 [2024-11-20 16:22:38.139358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24860 is same with the state(6) to be set
00:21:07.102 [2024-11-20 16:22:38.139617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49a40 is same with the state(6) to be set
00:21:07.102 [2024-11-20 16:22:38.139650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.102 [2024-11-20 16:22:38.139887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.102 [2024-11-20 16:22:38.139895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.139904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.139912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.139922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.139929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.139938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.139946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.139956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.139968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.139977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.139985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.139994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.140005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.140015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.140024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.140033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.140043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.140055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.140062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.140072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.140080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.140089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.140097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.140106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.140114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.140126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.103 [2024-11-20 16:22:38.140135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.103 [2024-11-20 16:22:38.140139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49a40 is same with the state(6) to be set 00:21:07.103
[2024-11-20 16:22:38.140144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.103 [2024-11-20 16:22:38.140146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49a40 is same with the state(6) to be set 00:21:07.103 [2024-11-20 16:22:38.140153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.103 [2024-11-20 16:22:38.140162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.103 [2024-11-20 16:22:38.140169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 
16:22:38.140303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 
16:22:38.140455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140605] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.104 [2024-11-20 16:22:38.140664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.104 [2024-11-20 16:22:38.140672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.140678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.140686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.140693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.140719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:07.105 [2024-11-20 16:22:38.141755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141820] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.141990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.141997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.105 [2024-11-20 16:22:38.142238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.105 [2024-11-20 16:22:38.142245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.142256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.142263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.142271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.142277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.142286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.142293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.142301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.142307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.142315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.142322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.142331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.142338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.142346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.142352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.142361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.155918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.155947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.155957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.155968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.155977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.155988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.155997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.106 [2024-11-20 16:22:38.156262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.106 [2024-11-20 16:22:38.156274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.156283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.156307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.156328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.156349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.156371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.156393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.156414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.156435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.156455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.156476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:07.107 [2024-11-20 16:22:38.156716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ed1e0 (9): Bad file descriptor 00:21:07.107 [2024-11-20 16:22:38.156745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f82c0 (9): Bad file descriptor 00:21:07.107 [2024-11-20 16:22:38.156766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb250a0 (9): Bad file descriptor 00:21:07.107 [2024-11-20 16:22:38.156787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60d610 (9): Bad file descriptor 00:21:07.107 [2024-11-20 16:22:38.156808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb56370 (9): Bad file descriptor 00:21:07.107 [2024-11-20 16:22:38.156843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.107 [2024-11-20 16:22:38.156861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.107 [2024-11-20 16:22:38.156882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.107 [2024-11-20 16:22:38.156902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.107 [2024-11-20 16:22:38.156921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb19e20 is same with the state(6) to be set 00:21:07.107 [2024-11-20 16:22:38.156963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.107 [2024-11-20 16:22:38.156975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.156985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.107 [2024-11-20 16:22:38.156995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.107 [2024-11-20 16:22:38.157004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.107 [2024-11-20 16:22:38.157015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.157025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.107 [2024-11-20 16:22:38.157034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.157043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb57200 is same with the state(6) to be set 00:21:07.107 [2024-11-20 16:22:38.157057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4b550 (9): Bad file descriptor 00:21:07.107 [2024-11-20 16:22:38.157080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f91b0 (9): Bad file descriptor 00:21:07.107 [2024-11-20 16:22:38.157097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb24860 (9): Bad file descriptor 00:21:07.107 [2024-11-20 16:22:38.158424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.158451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.158467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.158478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.158489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.158499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.158516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.158526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.158537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.158547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.158558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.158568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.158579] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.158589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.158600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.158610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.158621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.107 [2024-11-20 16:22:38.158631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.107 [2024-11-20 16:22:38.158644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.158980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.158989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.108 [2024-11-20 16:22:38.159319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.108 [2024-11-20 16:22:38.159332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:07.109 [2024-11-20 16:22:38.159450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 
16:22:38.159663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.159802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.109 [2024-11-20 16:22:38.159811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.109 [2024-11-20 16:22:38.162005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:07.109 [2024-11-20 16:22:38.164218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:07.109 [2024-11-20 16:22:38.164259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:07.109 [2024-11-20 16:22:38.164279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb57200 (9): Bad file descriptor 00:21:07.109 [2024-11-20 16:22:38.164502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.109 [2024-11-20 16:22:38.164528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f91b0 with addr=10.0.0.2, port=4420 00:21:07.109 [2024-11-20 16:22:38.164544] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f91b0 is same with the state(6) to be set 00:21:07.109 [2024-11-20 16:22:38.165768] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.109 [2024-11-20 16:22:38.165911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.109 [2024-11-20 16:22:38.165939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ed1e0 with addr=10.0.0.2, port=4420 00:21:07.109 [2024-11-20 16:22:38.165954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ed1e0 is same with the state(6) to be set 00:21:07.109 [2024-11-20 16:22:38.165988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f91b0 (9): Bad file descriptor 00:21:07.109 [2024-11-20 16:22:38.166453] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.109 [2024-11-20 16:22:38.166521] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.109 [2024-11-20 16:22:38.166584] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.109 [2024-11-20 16:22:38.166647] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.109 [2024-11-20 16:22:38.166723] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.109 [2024-11-20 16:22:38.166786] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.109 [2024-11-20 16:22:38.166946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.109 [2024-11-20 16:22:38.166971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb57200 with addr=10.0.0.2, port=4420 00:21:07.109 [2024-11-20 16:22:38.166986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb57200 is same with the state(6) to be set 00:21:07.109 [2024-11-20 16:22:38.167005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ed1e0 (9): Bad file descriptor 00:21:07.109 [2024-11-20 16:22:38.167022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:07.109 [2024-11-20 16:22:38.167035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:07.109 [2024-11-20 16:22:38.167048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:07.109 [2024-11-20 16:22:38.167063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:07.109 [2024-11-20 16:22:38.167137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb19e20 (9): Bad file descriptor 00:21:07.109 [2024-11-20 16:22:38.167367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb57200 (9): Bad file descriptor 00:21:07.110 [2024-11-20 16:22:38.167390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:07.110 [2024-11-20 16:22:38.167403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:07.110 [2024-11-20 16:22:38.167416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:21:07.110 [2024-11-20 16:22:38.167430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:07.110 [2024-11-20 16:22:38.167502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167795] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.167971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.167987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.110 [2024-11-20 16:22:38.168428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.110 [2024-11-20 16:22:38.168445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.168974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.168988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.169004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.169020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.169037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.169052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.169069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.169082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.169098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.169111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.169127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.169140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.169155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.169168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.169185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.111 [2024-11-20 16:22:38.169198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.111 [2024-11-20 16:22:38.169218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.169231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.169246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.169259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.169276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.169288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.169304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.169318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.169333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.169347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.169363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.169375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.169391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.169404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.169421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae7310 is same with the state(6) to be set 00:21:07.112 [2024-11-20 16:22:38.170867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.170884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.170897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.170906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.170919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.170929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.170941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.170951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.170961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.170970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.170981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.170990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-11-20 16:22:38.171266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.112 [2024-11-20 16:22:38.171276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:07.113 [2024-11-20 16:22:38.171781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.113 [2024-11-20 16:22:38.171791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.113 [2024-11-20 16:22:38.171800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.171812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.171820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.171831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.171839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.171850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.171861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.171872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.171880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.171890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.171899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.171909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.171918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.171930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.171938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.171949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.171957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.171968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 
16:22:38.171976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.171987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.171997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.172007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.172016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.172026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.172035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.172046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.172054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.172064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.172073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.172083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.172092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.172105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.172114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.172125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.172133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.172142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf4420 is same with the state(6) to be set 00:21:07.114 [2024-11-20 16:22:38.173320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.173336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.173350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.173358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.173371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.173379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.173390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.173398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.173409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.173419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.173430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.173439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.173449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.173459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.173470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.173480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.173492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.173500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.173511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.114 [2024-11-20 16:22:38.173519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.114 [2024-11-20 16:22:38.173533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.173982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.173992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.174001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.174012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.174021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.174034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.174043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.174055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.174063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.174074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.174082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.174092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.174101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.174111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.174120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.174131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.174139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.115 [2024-11-20 16:22:38.174150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.115 [2024-11-20 16:22:38.174159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.174590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.174599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f9f0 is same with the state(6) to be set 00:21:07.116 [2024-11-20 16:22:38.175761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.175777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.175791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.175799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.175810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.175819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.175830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.175838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.175849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.175857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.175869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.116 [2024-11-20 16:22:38.175876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.116 [2024-11-20 16:22:38.175887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.175895] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.175907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.175915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.175925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.175937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.175948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.175956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.175967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.175975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.175986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.175994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.117 [2024-11-20 16:22:38.176317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.117 [2024-11-20 16:22:38.176328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.118 [2024-11-20 16:22:38.176697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.118 [2024-11-20 16:22:38.176812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.118 [2024-11-20 16:22:38.176820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.176830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.176838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.176849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.176860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.176870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.176878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 
16:22:38.176888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.176896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.176907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.176917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.176928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.176936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.176947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.176955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.176966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.176974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.176985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.176993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.177004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.177012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.177023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ad240 is same with the state(6) to be set 00:21:07.119 [2024-11-20 16:22:38.178191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178255] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.119 [2024-11-20 16:22:38.178615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.119 [2024-11-20 16:22:38.178626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.178983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.178994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.179014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.179033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.179052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.179073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.179092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.179112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.179131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.179151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.179170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.179189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.120 [2024-11-20 16:22:38.179214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.120 [2024-11-20 16:22:38.179222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.179457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.179468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac20 is same with the state(6) to be set 00:21:07.121 [2024-11-20 16:22:38.180576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.121 [2024-11-20 16:22:38.180927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.121 [2024-11-20 16:22:38.180935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.180941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.180951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.180957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.180967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.180974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.180984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.180992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:07.122 [2024-11-20 16:22:38.181380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.122 [2024-11-20 16:22:38.181426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.122 [2024-11-20 16:22:38.181435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.123 [2024-11-20 16:22:38.181442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.123 [2024-11-20 16:22:38.181451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.123 [2024-11-20 16:22:38.181458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.123 [2024-11-20 16:22:38.181467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.123 [2024-11-20 16:22:38.181474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.123 [2024-11-20 16:22:38.181484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.123 [2024-11-20 16:22:38.181492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.123 [2024-11-20 16:22:38.181500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.123 [2024-11-20 16:22:38.181507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.123 [2024-11-20 16:22:38.181516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.123 [2024-11-20 16:22:38.181522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.123 [2024-11-20 16:22:38.181531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.123 [2024-11-20 
16:22:38.181538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.123 [2024-11-20 16:22:38.181546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.123 [2024-11-20 16:22:38.181554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.123 [2024-11-20 16:22:38.181562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.123 [2024-11-20 16:22:38.181569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.123 [2024-11-20 16:22:38.181578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.123 [2024-11-20 16:22:38.181585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.123 [2024-11-20 16:22:38.181593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.123 [2024-11-20 16:22:38.181600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.123 [2024-11-20 16:22:38.181609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.123 [2024-11-20 16:22:38.181616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.123 [2024-11-20 16:22:38.181624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03ea0 is same with the state(6) to be set
00:21:07.123 [2024-11-20 16:22:38.182573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:07.123 [2024-11-20 16:22:38.182592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:07.123 [2024-11-20 16:22:38.182603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:07.123 [2024-11-20 16:22:38.182614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:07.123 [2024-11-20 16:22:38.182652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:07.123 [2024-11-20 16:22:38.182660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:07.123 [2024-11-20 16:22:38.182669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:07.123 [2024-11-20 16:22:38.182680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:07.123 [2024-11-20 16:22:38.182727] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:21:07.123 [2024-11-20 16:22:38.182743] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:21:07.123 [2024-11-20 16:22:38.182817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:07.123 [2024-11-20 16:22:38.182830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:07.123 [2024-11-20 16:22:38.183121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.123 [2024-11-20 16:22:38.183138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f82c0 with addr=10.0.0.2, port=4420
00:21:07.123 [2024-11-20 16:22:38.183148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f82c0 is same with the state(6) to be set
00:21:07.123 [2024-11-20 16:22:38.183278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.123 [2024-11-20 16:22:38.183291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb250a0 with addr=10.0.0.2, port=4420
00:21:07.123 [2024-11-20 16:22:38.183298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb250a0 is same with the state(6) to be set
00:21:07.123 [2024-11-20 16:22:38.183509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.123 [2024-11-20 16:22:38.183520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb24860 with addr=10.0.0.2, port=4420
00:21:07.123 [2024-11-20 16:22:38.183528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24860 is same with the state(6) to be set
00:21:07.123 [2024-11-20 16:22:38.183606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.123 [2024-11-20 16:22:38.183618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60d610 with addr=10.0.0.2, port=4420
00:21:07.123 [2024-11-20 16:22:38.183626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60d610 is same with the state(6) to be set
00:21:07.123 [2024-11-20 16:22:38.184774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.123 [2024-11-20 16:22:38.184791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.123 [2024-11-20 16:22:38.184805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.123 [2024-11-20 16:22:38.184812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.123 [2024-11-20 16:22:38.184821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.123 [2024-11-20 16:22:38.184830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.123 [2024-11-20 16:22:38.184839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.123 [2024-11-20 16:22:38.184847]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.184855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.184863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.184875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.184883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.184892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.184900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.184908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.184916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.184924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.184932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.184940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.184948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.184956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.184964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.184973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.184981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.184989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.184996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.124 [2024-11-20 16:22:38.185456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.124 [2024-11-20 16:22:38.185463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.125 [2024-11-20 16:22:38.185824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.125 [2024-11-20 16:22:38.185831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.125 [2024-11-20 16:22:38.185839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a485e0 is same with the state(6) to be set
00:21:07.125 [2024-11-20 16:22:38.187008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:07.125 [2024-11-20 16:22:38.187028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:07.125 [2024-11-20 16:22:38.187040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:07.125 task offset: 26240 on job bdev=Nvme1n1 fails
00:21:07.125
00:21:07.125 Latency(us)
00:21:07.125 [2024-11-20T15:22:38.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:07.125 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.125 Job: Nvme1n1 ended in about 0.89 seconds with error
00:21:07.125 Verification LBA range: start 0x0 length 0x400
00:21:07.125 Nvme1n1 : 0.89 215.81 13.49 71.94 0.00 220164.63 17101.78 221698.93
00:21:07.125 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.125 Job: Nvme2n1 ended in about 0.89 seconds with error
00:21:07.125 Verification LBA range: start 0x0 length 0x400
00:21:07.125 Nvme2n1 : 0.89 214.59 13.41 71.53 0.00 217489.07 16727.28 217704.35
00:21:07.125 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.125 Job: Nvme3n1 ended in about 0.90 seconds with error
00:21:07.125 Verification LBA range: start 0x0 length 0x400
00:21:07.125 Nvme3n1 : 0.90 212.85 13.30 70.95 0.00 215438.38 15291.73 215707.06
00:21:07.125 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.125 Job: Nvme4n1 ended in about 0.90 seconds with error
00:21:07.125 Verification LBA range: start 0x0 length 0x400
00:21:07.125 Nvme4n1 : 0.90 212.25 13.27 70.75 0.00 212198.52 13169.62 211712.49
00:21:07.125 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.125 Job: Nvme5n1 ended in about 0.91 seconds with error
00:21:07.125 Verification LBA range: start 0x0 length 0x400
00:21:07.125 Nvme5n1 : 0.91 216.09 13.51 70.56 0.00 205742.95 16103.13 212711.13
00:21:07.125 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.125 Job: Nvme6n1 ended in about 0.91 seconds with error
00:21:07.125 Verification LBA range: start 0x0 length 0x400
00:21:07.125 Nvme6n1 : 0.91 211.11 13.19 70.37 0.00 205639.44 16103.13 225693.50
00:21:07.126 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.126 Job: Nvme7n1 ended in about 0.91 seconds with error
00:21:07.126 Verification LBA range: start 0x0 length 0x400
00:21:07.126 Nvme7n1 : 0.91 214.94 13.43 70.18 0.00 199225.12 8176.40 214708.42
00:21:07.126 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.126 Job: Nvme8n1 ended in about 0.92 seconds with error
00:21:07.126 Verification LBA range: start 0x0 length 0x400
00:21:07.126 Nvme8n1 : 0.92 209.10 13.07 69.70 0.00 200022.80 17850.76 214708.42
00:21:07.126 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.126 Job: Nvme9n1 ended in about 0.89 seconds with error
00:21:07.126 Verification LBA range: start 0x0 length 0x400
00:21:07.126 Nvme9n1 : 0.89 215.04 13.44 71.68 0.00 189780.60 18599.74 218702.99
00:21:07.126 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.126 Job: Nvme10n1 ended in about 0.91 seconds with error
00:21:07.126 Verification LBA range: start 0x0 length 0x400
00:21:07.126 Nvme10n1 : 0.91 140.04 8.75 70.02 0.00 255143.42 17351.44 241671.80
00:21:07.126 [2024-11-20T15:22:38.360Z] ===================================================================================================================
00:21:07.126 [2024-11-20T15:22:38.360Z] Total : 2061.84 128.86 707.69 0.00 210953.27 8176.40 241671.80
00:21:07.126 [2024-11-20 16:22:38.220058] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:07.126 [2024-11-20 16:22:38.220106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:07.126 [2024-11-20 16:22:38.220443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.126 [2024-11-20 16:22:38.220464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4b550 with addr=10.0.0.2, port=4420
00:21:07.126 [2024-11-20 16:22:38.220476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4b550 is same with the state(6) to be set
00:21:07.126 [2024-11-20 16:22:38.220601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.126 [2024-11-20 16:22:38.220614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb56370 with addr=10.0.0.2, port=4420
00:21:07.126 [2024-11-20 16:22:38.220623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb56370 is same with the state(6) to be set
00:21:07.126 [2024-11-20 16:22:38.220639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f82c0 (9): Bad file descriptor
00:21:07.126 [2024-11-20 16:22:38.220653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb250a0 (9): Bad file descriptor
00:21:07.126 [2024-11-20 16:22:38.220663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb24860 (9): Bad file descriptor
00:21:07.126 [2024-11-20 16:22:38.220679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60d610 (9): Bad file descriptor
00:21:07.126 [2024-11-20 16:22:38.220985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.126 [2024-11-20 16:22:38.221002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f91b0 with addr=10.0.0.2, port=4420
00:21:07.126 [2024-11-20 16:22:38.221012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f91b0 is same with the state(6) to be set
00:21:07.126 [2024-11-20 16:22:38.221139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.126 [2024-11-20 16:22:38.221152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ed1e0 with addr=10.0.0.2, port=4420
00:21:07.126 [2024-11-20 16:22:38.221160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ed1e0 is same with the state(6) to be set
00:21:07.126 [2024-11-20 16:22:38.221312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.126 [2024-11-20 16:22:38.221325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb57200 with addr=10.0.0.2, port=4420
00:21:07.126 [2024-11-20
16:22:38.221334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb57200 is same with the state(6) to be set 00:21:07.126 [2024-11-20 16:22:38.221488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.126 [2024-11-20 16:22:38.221501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb19e20 with addr=10.0.0.2, port=4420 00:21:07.126 [2024-11-20 16:22:38.221509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb19e20 is same with the state(6) to be set 00:21:07.126 [2024-11-20 16:22:38.221519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4b550 (9): Bad file descriptor 00:21:07.126 [2024-11-20 16:22:38.221529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb56370 (9): Bad file descriptor 00:21:07.126 [2024-11-20 16:22:38.221539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:07.126 [2024-11-20 16:22:38.221546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:07.126 [2024-11-20 16:22:38.221554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:07.126 [2024-11-20 16:22:38.221564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:07.126 [2024-11-20 16:22:38.221574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:07.126 [2024-11-20 16:22:38.221581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:07.126 [2024-11-20 16:22:38.221589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:07.126 [2024-11-20 16:22:38.221595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:07.126 [2024-11-20 16:22:38.221603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:07.126 [2024-11-20 16:22:38.221610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:07.126 [2024-11-20 16:22:38.221616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:07.126 [2024-11-20 16:22:38.221624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:07.126 [2024-11-20 16:22:38.221631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:07.126 [2024-11-20 16:22:38.221638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:07.126 [2024-11-20 16:22:38.221648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:07.126 [2024-11-20 16:22:38.221655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
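For reading the bdevperf summary above: these jobs use 65536-byte verify I/Os, so the MiB/s column is simply IOPS divided by 16 and the figures can be cross-checked by hand. A quick illustrative check (any awk will do; this is not part of the harness output):

  # MiB/s = IOPS * 65536 / 2^20 = IOPS / 16
  awk 'BEGIN { printf "Nvme1n1: %.2f MiB/s, all devices: %.2f MiB/s\n", 215.81/16, 2061.84/16 }'

Nvme1n1 works out to 13.49 MiB/s, matching its row; the Total row's 128.86 MiB/s equals the sum of the already-rounded per-device values, so it can differ from 2061.84/16 in the last digit.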
00:21:07.126 [2024-11-20 16:22:38.221702] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:07.126 [2024-11-20 16:22:38.221714] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:21:07.126 [2024-11-20 16:22:38.222049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f91b0 (9): Bad file descriptor 00:21:07.126 [2024-11-20 16:22:38.222065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ed1e0 (9): Bad file descriptor 00:21:07.126 [2024-11-20 16:22:38.222076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb57200 (9): Bad file descriptor 00:21:07.126 [2024-11-20 16:22:38.222085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb19e20 (9): Bad file descriptor 00:21:07.126 [2024-11-20 16:22:38.222094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:07.126 [2024-11-20 16:22:38.222101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:07.126 [2024-11-20 16:22:38.222108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:07.126 [2024-11-20 16:22:38.222116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:07.126 [2024-11-20 16:22:38.222124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:07.126 [2024-11-20 16:22:38.222130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:07.126 [2024-11-20 16:22:38.222138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:07.126 [2024-11-20 16:22:38.222144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:07.126 [2024-11-20 16:22:38.222179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:07.126 [2024-11-20 16:22:38.222192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:07.126 [2024-11-20 16:22:38.222211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:07.126 [2024-11-20 16:22:38.222221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:07.126 [2024-11-20 16:22:38.222250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:07.126 [2024-11-20 16:22:38.222258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:07.126 [2024-11-20 16:22:38.222265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:07.126 [2024-11-20 16:22:38.222271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:07.127 [2024-11-20 16:22:38.222280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:07.127 [2024-11-20 16:22:38.222286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:07.127 [2024-11-20 16:22:38.222293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:07.127 [2024-11-20 16:22:38.222300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:07.127 [2024-11-20 16:22:38.222311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:07.127 [2024-11-20 16:22:38.222317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:07.127 [2024-11-20 16:22:38.222325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:07.127 [2024-11-20 16:22:38.222331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:07.127 [2024-11-20 16:22:38.222338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:07.127 [2024-11-20 16:22:38.222345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:07.127 [2024-11-20 16:22:38.222352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:07.127 [2024-11-20 16:22:38.222359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:21:07.127 [2024-11-20 16:22:38.222588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.127 [2024-11-20 16:22:38.222603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60d610 with addr=10.0.0.2, port=4420 00:21:07.127 [2024-11-20 16:22:38.222611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60d610 is same with the state(6) to be set 00:21:07.127 [2024-11-20 16:22:38.222774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.127 [2024-11-20 16:22:38.222786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb24860 with addr=10.0.0.2, port=4420 00:21:07.127 [2024-11-20 16:22:38.222794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24860 is same with the state(6) to be set 00:21:07.127 [2024-11-20 16:22:38.223010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.127 [2024-11-20 16:22:38.223023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb250a0 with addr=10.0.0.2, port=4420 00:21:07.127 [2024-11-20 16:22:38.223031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb250a0 is same with the state(6) to be set 00:21:07.127 [2024-11-20 16:22:38.223177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.127 [2024-11-20 16:22:38.223190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f82c0 with addr=10.0.0.2, port=4420 00:21:07.127 [2024-11-20 16:22:38.223198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f82c0 is same with the state(6) to be set 00:21:07.127 [2024-11-20 16:22:38.223232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60d610 (9): Bad file descriptor 00:21:07.127 [2024-11-20 16:22:38.223245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb24860 (9): Bad file descriptor 00:21:07.127 [2024-11-20 16:22:38.223255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb250a0 (9): Bad file descriptor 00:21:07.127 [2024-11-20 16:22:38.223265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f82c0 (9): Bad file descriptor 00:21:07.127 [2024-11-20 16:22:38.223290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:07.127 [2024-11-20 16:22:38.223299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:07.127 [2024-11-20 16:22:38.223307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:07.127 [2024-11-20 16:22:38.223314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:07.127 [2024-11-20 16:22:38.223321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:07.127 [2024-11-20 16:22:38.223332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:07.127 [2024-11-20 16:22:38.223339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
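The recurring "connect() failed, errno = 111" entries are ECONNREFUSED on Linux: nothing is accepting TCP connections on 10.0.0.2:4420 at this point, which is expected while the shutdown test tears the target down, so the host-side reconnect attempts keep failing until the controllers are finally marked failed. To confirm the errno mapping on a build host (illustrative only; assumes python3 is installed, which this log does not show):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused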
00:21:07.127 [2024-11-20 16:22:38.223346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:07.127 [2024-11-20 16:22:38.223355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:07.127 [2024-11-20 16:22:38.223361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:07.127 [2024-11-20 16:22:38.223368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:07.127 [2024-11-20 16:22:38.223375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:07.127 [2024-11-20 16:22:38.223382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:07.127 [2024-11-20 16:22:38.223389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:07.127 [2024-11-20 16:22:38.223396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:07.127 [2024-11-20 16:22:38.223403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:07.387 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:08.327 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1979800 00:21:08.327 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:08.327 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1979800 00:21:08.327 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:08.327 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:08.327 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:08.327 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:08.327 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1979800 00:21:08.327 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:08.327 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:08.327 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:08.328 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:08.328 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:08.328 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:08.328 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:08.328 16:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:08.328 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:08.328 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:08.328 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:08.328 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:08.328 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.587 rmmod nvme_tcp 00:21:08.587 rmmod nvme_fabrics 00:21:08.587 rmmod nvme_keyring 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1979517 ']' 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1979517 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1979517 ']' 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1979517 00:21:08.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1979517) - No such process 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1979517 is not found' 00:21:08.587 Process with pid 1979517 is not found 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 
-- # iptables-restore 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.587 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.500 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.500 00:21:10.500 real 0m8.138s 00:21:10.500 user 0m20.757s 00:21:10.500 sys 0m1.431s 00:21:10.500 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.500 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.500 ************************************ 00:21:10.500 END TEST nvmf_shutdown_tc3 00:21:10.500 ************************************ 00:21:10.500 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:10.500 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:10.500 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:10.500 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:10.500 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.500 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:10.760 ************************************ 00:21:10.760 START TEST nvmf_shutdown_tc4 00:21:10.760 ************************************ 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.760 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:10.761 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:10.761 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:10.761 Found net devices under 0000:86:00.0: cvl_0_0 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:10.761 Found net devices under 0000:86:00.1: cvl_0_1 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.761 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.761 16:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.762 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:11.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:21:11.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:21:11.022 00:21:11.022 --- 10.0.0.2 ping statistics --- 00:21:11.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.022 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:21:11.022 00:21:11.022 --- 10.0.0.1 ping statistics --- 00:21:11.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.022 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1981066 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1981066 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1981066 ']' 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.022 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.023 16:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.023 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.023 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.023 [2024-11-20 16:22:42.145786] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:21:11.023 [2024-11-20 16:22:42.145828] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.023 [2024-11-20 16:22:42.221348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.282 [2024-11-20 16:22:42.275721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.282 [2024-11-20 16:22:42.275765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.282 [2024-11-20 16:22:42.275777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.282 [2024-11-20 16:22:42.275786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.282 [2024-11-20 16:22:42.275794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.282 [2024-11-20 16:22:42.277969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.282 [2024-11-20 16:22:42.278076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.282 [2024-11-20 16:22:42.278186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:11.282 [2024-11-20 16:22:42.278188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.282 [2024-11-20 16:22:42.422611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.282 16:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.282 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:11.283 
16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.283 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.283 Malloc1 00:21:11.542 [2024-11-20 16:22:42.526717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.542 Malloc2 00:21:11.542 Malloc3 00:21:11.542 Malloc4 00:21:11.542 Malloc5 00:21:11.542 Malloc6 00:21:11.542 Malloc7 00:21:11.801 Malloc8 00:21:11.801 Malloc9 00:21:11.801 Malloc10 00:21:11.801 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.801 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:11.801 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.801 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.801 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1981239 00:21:11.801 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:11.801 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:12.060 [2024-11-20 16:22:43.036806] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
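Editor's note: the write-failure storm that follows is easier to read with the tc4 flow in mind. The sketch below is assembled only from the commands visible in this trace; it is not the verbatim shutdown.sh. Helper functions such as rpc_cmd, waitforlisten and killprocess come from the harness scripts the trace sources (common/autotest_common.sh, nvmf/common.sh) and their bodies are not reproduced, the per-subsystem RPC payloads written into rpcs.txt are not shown in this excerpt, and the backgrounding/pid capture is inferred from the nvmfpid/perfpid assignments above.

  # nvmf_shutdown_tc4, reconstructed from the xtrace above (sketch, assumptions noted per line)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!                      # 1981066 in this run; backgrounding inferred
  waitforlisten "$nvmfpid"        # harness helper; waits for /var/tmp/spdk.sock

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192

  rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
  rm -rf "$rpcs"
  for i in {1..10}; do
      cat >> "$rpcs"              # per-subsystem RPCs appended here; payloads not shown in this excerpt
  done
  rpc_cmd < "$rpcs"               # shutdown.sh@36 runs rpc_cmd at this point; exactly how rpcs.txt is fed
                                  # to it is not visible here, but the result is Malloc1..Malloc10 and the
                                  # 10.0.0.2:4420 TCP listener seen above

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!                      # 1981239 in this run
  sleep 5
  killprocess "$nvmfpid"          # kill the target while the 128-deep random writes are still in flight

Killing the target mid-run is what produces the output that follows: the initiator's queued writes complete with an error status ("Write completed with error (sct=0, sc=8)", "starting I/O failed: -6") and each qpair reports "CQ transport error -6 (No such device or address)" once its TCP connection to the dead target drops.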
00:21:17.343 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:17.343 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1981066 00:21:17.343 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1981066 ']' 00:21:17.343 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1981066 00:21:17.343 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:17.343 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.343 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1981066 00:21:17.343 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:17.343 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:17.343 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1981066' 00:21:17.343 killing process with pid 1981066 00:21:17.343 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1981066 00:21:17.343 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1981066 00:21:17.344 [2024-11-20 16:22:48.035801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2093c70 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.035860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2093c70 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.035868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2093c70 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.035875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2093c70 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.035882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2093c70 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.035889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2093c70 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.036536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094140 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.036572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094140 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.036587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094140 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.036593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094140 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.036600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2094140 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.036606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094140 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.037721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094610 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.037752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094610 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.037760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094610 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.037767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094610 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.037774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094610 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.037781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094610 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.037786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094610 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.037793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094610 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.038779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20937a0 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.038807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20937a0 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.038815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20937a0 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.038822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20937a0 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.038829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20937a0 is same with the state(6) to be set 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 [2024-11-20 16:22:48.040787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with starting I/O failed: -6 00:21:17.344 the state(6) to be set 00:21:17.344 [2024-11-20 
16:22:48.040812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with Write completed with error (sct=0, sc=8) 00:21:17.344 the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.040821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with the state(6) to be set 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 [2024-11-20 16:22:48.040827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.040835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with the state(6) to be set 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 [2024-11-20 16:22:48.040846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.040853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with the state(6) to be set 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 [2024-11-20 16:22:48.040860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with starting I/O failed: -6 00:21:17.344 the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.040867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.040873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with the state(6) to be set 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 [2024-11-20 16:22:48.040879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with the state(6) to be set 00:21:17.344 [2024-11-20 16:22:48.040886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2020 is same with the state(6) to be set 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 [2024-11-20 16:22:48.041179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write 
completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 starting I/O failed: -6 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.344 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 [2024-11-20 16:22:48.042133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with 
error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 [2024-11-20 16:22:48.042322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080000 is same with the state(6) to be set 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 [2024-11-20 16:22:48.042345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080000 is same with the state(6) to be set 00:21:17.345 [2024-11-20 16:22:48.042353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080000 is same with the state(6) to be set 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 [2024-11-20 16:22:48.042360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080000 is same with the state(6) to be set 00:21:17.345 starting I/O failed: -6 00:21:17.345 [2024-11-20 16:22:48.042367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080000 is same with the state(6) to be set 00:21:17.345 [2024-11-20 16:22:48.042373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080000 is same with the state(6) to be set 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 [2024-11-20 16:22:48.042379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080000 is same with the state(6) to be set 00:21:17.345 [2024-11-20 16:22:48.042386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080000 is same with the state(6) to be set 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting 
I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 [2024-11-20 16:22:48.043115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write 
completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 starting I/O failed: -6 00:21:17.345 [2024-11-20 16:22:48.043973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9f6b0 is same with Write completed with error (sct=0, sc=8) 00:21:17.345 the state(6) to be set 00:21:17.345 starting I/O failed: -6 00:21:17.345 [2024-11-20 16:22:48.043998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9f6b0 is same with the state(6) to be set 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 [2024-11-20 16:22:48.044006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9f6b0 is same with the state(6) to be set 00:21:17.345 starting I/O failed: -6 00:21:17.345 [2024-11-20 16:22:48.044014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9f6b0 is same with the state(6) to be set 00:21:17.345 Write completed with error (sct=0, sc=8) 00:21:17.345 [2024-11-20 16:22:48.044024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9f6b0 is same with the state(6) to be set 00:21:17.345 starting I/O failed: -6 00:21:17.345 [2024-11-20 16:22:48.044031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9f6b0 is same with the state(6) to be set 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error 
(sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 [2024-11-20 16:22:48.044313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9fba0 is same with the state(6) to be set 00:21:17.346 [2024-11-20 16:22:48.044328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9fba0 is same with the state(6) to be set 00:21:17.346 [2024-11-20 16:22:48.044334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9fba0 is same with the state(6) to be set 00:21:17.346 [2024-11-20 16:22:48.044341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9fba0 is same with the state(6) to be set 00:21:17.346 [2024-11-20 16:22:48.044347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9fba0 is same with the state(6) to be set 00:21:17.346 [2024-11-20 16:22:48.044353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9fba0 is same with the state(6) to be set 00:21:17.346 [2024-11-20 16:22:48.044360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9fba0 is same with the state(6) to be set 00:21:17.346 [2024-11-20 16:22:48.044366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9fba0 is same with the state(6) to be set 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 [2024-11-20 16:22:48.044547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:17.346 NVMe io qpair process completion error 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 
Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 [2024-11-20 16:22:48.045478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 [2024-11-20 16:22:48.045923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea12e0 is same with the state(6) to be set 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 [2024-11-20 16:22:48.045945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea12e0 is same with the state(6) to be set 00:21:17.346 [2024-11-20 16:22:48.045953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea12e0 is same with the state(6) to be set 00:21:17.346 
Write completed with error (sct=0, sc=8) 00:21:17.346 [2024-11-20 16:22:48.045960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea12e0 is same with the state(6) to be set 00:21:17.346 [2024-11-20 16:22:48.045968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea12e0 is same with the state(6) to be set 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 [2024-11-20 16:22:48.045975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea12e0 is same with the state(6) to be set 00:21:17.346 [2024-11-20 16:22:48.045982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea12e0 is same with the state(6) to be set 00:21:17.346 [2024-11-20 16:22:48.045988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea12e0 is same with Write completed with error (sct=0, sc=8) 00:21:17.346 the state(6) to be set 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 [2024-11-20 16:22:48.046381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.346 Write completed with error (sct=0, sc=8) 00:21:17.346 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 
00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 [2024-11-20 16:22:48.047568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error 
(sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error 
(sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 [2024-11-20 16:22:48.049127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:17.347 NVMe io qpair process completion error 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 Write completed with error (sct=0, sc=8) 00:21:17.347 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with 
error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 [2024-11-20 16:22:48.050072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 
00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 [2024-11-20 16:22:48.050954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:17.348 starting I/O failed: -6 00:21:17.348 starting I/O failed: -6 00:21:17.348 starting I/O failed: -6 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 
00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.348 starting I/O failed: -6 00:21:17.348 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 [2024-11-20 16:22:48.052116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 
00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 [2024-11-20 16:22:48.054189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:17.349 NVMe io qpair process completion error 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 
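Editor's note on the "[nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address)" lines above: they are emitted by the host-side SPDK driver (nvme_qpair.c, spdk_nvme_qpair_process_completions) when completion processing finds the TCP connection to the target gone; -6 is -ENXIO. As an illustrative sketch only, not the test application's actual code, a caller of that public API would observe the same condition roughly as follows (poll_io_qpair is a hypothetical helper name):

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/*
 * Hypothetical polling loop: drain completions on an I/O qpair and watch
 * for a transport-level failure. A negative return from
 * spdk_nvme_qpair_process_completions() (here -ENXIO, i.e. -6) corresponds
 * to the "CQ transport error -6 (No such device or address)" lines above.
 */
static int
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc;

	rc = spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no limit */);
	if (rc < 0) {
		/* The qpair/controller went away, e.g. the target was shut down. */
		fprintf(stderr, "qpair completion processing failed: %d\n", (int)rc);
		return (int)rc;
	}
	return 0;	/* rc >= 0: number of completions reaped this pass */
}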
00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 starting I/O failed: -6 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.349 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 [2024-11-20 16:22:48.055192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 
00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 [2024-11-20 16:22:48.056073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 
Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 [2024-11-20 16:22:48.057110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 
00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.350 starting I/O failed: -6 00:21:17.350 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 
00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 [2024-11-20 16:22:48.059251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:17.351 NVMe io qpair process completion error 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 
starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 [2024-11-20 16:22:48.060288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 Write completed with error (sct=0, sc=8) 00:21:17.351 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, 
sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 [2024-11-20 16:22:48.061185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write 
completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 [2024-11-20 16:22:48.062220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.352 Write completed with error (sct=0, sc=8) 00:21:17.352 starting I/O failed: -6 00:21:17.353 Write completed with error (sct=0, sc=8) 00:21:17.353 starting I/O failed: -6 00:21:17.353 Write completed with error (sct=0, sc=8) 
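Editor's note on the interleaved "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages: they are the test application's per-I/O output. sct=0/sc=8 is the NVMe generic status "Command Aborted due to SQ Deletion", and -6 is again -ENXIO from trying to submit new I/O on the failed qpair, consistent with the target subsystems (cnode2, cnode5, cnode8, cnode9, cnode10) being torn down during the test. The sketch below is a hypothetical reconstruction using SPDK's public NVMe API, not the actual test source (write_done and submit_one_write are made-up names):

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical completion callback: prints the same sct/sc pair seen above. */
static void
write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Hypothetical submit helper: once the connection is gone, submission itself fails. */
static void
submit_one_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		 void *buf, uint64_t lba)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, 1 /* one block */,
					write_done, NULL, 0 /* io_flags */);
	if (rc != 0) {
		/* -ENXIO (-6) after the target connection has been lost. */
		printf("starting I/O failed: %d\n", rc);
	}
}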
00:21:17.353 starting I/O failed: -6
00:21:17.353 Write completed with error (sct=0, sc=8)
00:21:17.353 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.353 [2024-11-20 16:22:48.066058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:17.353 NVMe io qpair process completion error
00:21:17.353 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.353 [2024-11-20 16:22:48.067001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:17.353 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.353 [2024-11-20 16:22:48.067894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:17.354 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.354 [2024-11-20 16:22:48.068898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:17.355 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.355 [2024-11-20 16:22:48.071473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:17.355 NVMe io qpair process completion error
00:21:17.355 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.355 [2024-11-20 16:22:48.072494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:17.355 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.355 [2024-11-20 16:22:48.073394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:17.356 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.356 [2024-11-20 16:22:48.074402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:17.356 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.356 [2024-11-20 16:22:48.075998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:17.356 NVMe io qpair process completion error
00:21:17.357 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.357 [2024-11-20 16:22:48.077015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:17.357 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.357 [2024-11-20 16:22:48.077880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:17.358 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.358 [2024-11-20 16:22:48.078907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:17.358 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.358 [2024-11-20 16:22:48.082385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:17.358 NVMe io qpair process completion error
00:21:17.358 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.358 [2024-11-20 16:22:48.083380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:17.359 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.359 [2024-11-20 16:22:48.084299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:17.359 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.359 [2024-11-20 16:22:48.085292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:17.360 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.360 [2024-11-20 16:22:48.090966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:17.360 NVMe io qpair process completion error
00:21:17.360 Write completed with error (sct=0, sc=8)
00:21:17.361 starting I/O failed: -6
00:21:17.361 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O 
failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Write completed with error (sct=0, sc=8) 00:21:17.361 starting I/O failed: -6 00:21:17.361 Initializing NVMe Controllers 00:21:17.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:17.361 Controller IO queue size 128, less than required. 00:21:17.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:17.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:17.361 Controller IO queue size 128, less than required.
00:21:17.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:17.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:17.361 Controller IO queue size 128, less than required.
00:21:17.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:17.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:17.361 Controller IO queue size 128, less than required.
00:21:17.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:17.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:17.361 Controller IO queue size 128, less than required.
00:21:17.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:17.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:17.361 Controller IO queue size 128, less than required.
00:21:17.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:17.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:17.361 Controller IO queue size 128, less than required.
00:21:17.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:17.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:17.361 Controller IO queue size 128, less than required.
00:21:17.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:17.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:17.362 Controller IO queue size 128, less than required.
00:21:17.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:17.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:17.362 Controller IO queue size 128, less than required.
00:21:17.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
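The "Controller IO queue size 128, less than required" lines above are spdk_nvme_perf warning that the requested queue depth exceeds the IO queue size the target advertises, so the excess requests are queued inside the NVMe driver, as the message itself notes. A minimal sketch of running the same kind of workload with a queue depth that fits within the advertised 128 entries, assuming the commonly documented spdk_nvme_perf options (-q queue depth, -o IO size in bytes, -w pattern, -t run time in seconds, -r transport ID); the address and subsystem NQN are the ones from this run:

    # keep -q at or below the target's advertised IO queue size (128 here) to avoid driver-side queueing
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w randwrite -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode4'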
00:21:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:17.362 Initialization complete. Launching workers.
00:21:17.362 ========================================================
00:21:17.362                                                                            Latency(us)
00:21:17.362 Device Information                                                      :      IOPS     MiB/s   Average       min       max
00:21:17.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   2247.27     96.56  56963.01    853.27  96350.25
00:21:17.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   2195.77     94.35  58309.84    713.34 110306.42
00:21:17.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   2184.30     93.86  58652.20    697.18 114251.48
00:21:17.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   2176.72     93.53  58909.98    783.61 122525.40
00:21:17.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   2197.28     94.41  57681.71    703.17 101817.54
00:21:17.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  2208.97     94.92  57385.82    826.21 100545.30
00:21:17.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   2209.18     94.93  57392.34    686.84  99630.19
00:21:17.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   2173.26     93.38  58358.97    686.73  98635.41
00:21:17.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   2175.64     93.48  58312.53    890.15 100641.87
00:21:17.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   2223.90     95.56  57085.23    893.76 105107.03
00:21:17.362 ========================================================
00:21:17.362 Total                                                                   :  21992.28    944.98  57899.17    686.73 122525.40
00:21:17.362
00:21:17.362 [2024-11-20 16:22:48.098514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cf890 is same with the state(6) to be set
00:21:17.362 [2024-11-20 16:22:48.098566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cfef0 is same with the state(6) to be set
00:21:17.362 [2024-11-20 16:22:48.098597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0410 is same with the state(6) to be set
00:21:17.362 [2024-11-20 16:22:48.098628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cf560 is same with the state(6) to be set
00:21:17.362 [2024-11-20 16:22:48.098656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1720 is same with the state(6) to be set
00:21:17.362 [2024-11-20 16:22:48.098683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x8d1900 is same with the state(6) to be set 00:21:17.362 [2024-11-20 16:22:48.098711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cfbc0 is same with the state(6) to be set 00:21:17.362 [2024-11-20 16:22:48.098738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0a70 is same with the state(6) to be set 00:21:17.362 [2024-11-20 16:22:48.098764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0740 is same with the state(6) to be set 00:21:17.362 [2024-11-20 16:22:48.098793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1ae0 is same with the state(6) to be set 00:21:17.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:17.362 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1981239 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1981239 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1981239 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:18.300 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:18.301 16:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:18.301 rmmod nvme_tcp 00:21:18.301 rmmod nvme_fabrics 00:21:18.301 rmmod nvme_keyring 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1981066 ']' 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1981066 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1981066 ']' 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1981066 00:21:18.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1981066) - No such process 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1981066 is not found' 00:21:18.301 Process with pid 1981066 is not found 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.301 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.837 16:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:20.837 00:21:20.837 real 0m9.801s 00:21:20.837 user 0m25.005s 00:21:20.837 sys 0m5.149s 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:20.837 ************************************ 00:21:20.837 END TEST nvmf_shutdown_tc4 00:21:20.837 ************************************ 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:20.837 00:21:20.837 real 0m40.644s 00:21:20.837 user 1m39.132s 00:21:20.837 sys 0m14.055s 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:20.837 ************************************ 00:21:20.837 END TEST nvmf_shutdown 00:21:20.837 ************************************ 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:20.837 ************************************ 00:21:20.837 START TEST nvmf_nsid 00:21:20.837 ************************************ 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:20.837 * Looking for test storage... 
00:21:20.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:20.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.837 --rc genhtml_branch_coverage=1 00:21:20.837 --rc genhtml_function_coverage=1 00:21:20.837 --rc genhtml_legend=1 00:21:20.837 --rc geninfo_all_blocks=1 00:21:20.837 --rc geninfo_unexecuted_blocks=1 00:21:20.837 00:21:20.837 ' 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:20.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.837 --rc genhtml_branch_coverage=1 00:21:20.837 --rc genhtml_function_coverage=1 00:21:20.837 --rc genhtml_legend=1 00:21:20.837 --rc geninfo_all_blocks=1 00:21:20.837 --rc geninfo_unexecuted_blocks=1 00:21:20.837 00:21:20.837 ' 00:21:20.837 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:20.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.838 --rc genhtml_branch_coverage=1 00:21:20.838 --rc genhtml_function_coverage=1 00:21:20.838 --rc genhtml_legend=1 00:21:20.838 --rc geninfo_all_blocks=1 00:21:20.838 --rc geninfo_unexecuted_blocks=1 00:21:20.838 00:21:20.838 ' 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:20.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.838 --rc genhtml_branch_coverage=1 00:21:20.838 --rc genhtml_function_coverage=1 00:21:20.838 --rc genhtml_legend=1 00:21:20.838 --rc geninfo_all_blocks=1 00:21:20.838 --rc geninfo_unexecuted_blocks=1 00:21:20.838 00:21:20.838 ' 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:20.838 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:27.410 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:27.410 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:27.410 Found net devices under 0000:86:00.0: cvl_0_0 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:27.410 Found net devices under 0000:86:00.1: cvl_0_1 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.410 16:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.410 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:21:27.411 00:21:27.411 --- 10.0.0.2 ping statistics --- 00:21:27.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.411 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:21:27.411 00:21:27.411 --- 10.0.0.1 ping statistics --- 00:21:27.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.411 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1985796 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1985796 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1985796 ']' 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.411 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:27.411 [2024-11-20 16:22:57.890488] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
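Before this nvmf target was launched, the nvmf_tcp_init trace above moved the target-side port (cvl_0_0) into a network namespace and left the initiator-side port (cvl_0_1) in the default namespace, so initiator and target can talk over real NICs on a single host. A condensed sketch of that setup, using the interface names, addresses, and binary path shown in the trace (the comment that the ipts helper appends to the iptables rule is omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                              # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator
    modprobe nvme-tcp
    # the nvmf target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1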
00:21:27.411 [2024-11-20 16:22:57.890535] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.411 [2024-11-20 16:22:57.969726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.411 [2024-11-20 16:22:58.011281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.411 [2024-11-20 16:22:58.011315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.411 [2024-11-20 16:22:58.011322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.411 [2024-11-20 16:22:58.011328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.411 [2024-11-20 16:22:58.011333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.411 [2024-11-20 16:22:58.011892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1985816 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=afbba18a-f078-4a9d-a983-0f01dcbd8b5a 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=e15aabd1-e7ad-46f7-8962-ff1e9b9233e9 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=0c915c32-8c62-4a0e-8e45-709aa861c97f 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:27.411 null0 00:21:27.411 null1 00:21:27.411 [2024-11-20 16:22:58.204781] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:21:27.411 [2024-11-20 16:22:58.204824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985816 ] 00:21:27.411 null2 00:21:27.411 [2024-11-20 16:22:58.212981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.411 [2024-11-20 16:22:58.237191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1985816 /var/tmp/tgt2.sock 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1985816 ']' 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:27.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
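To keep the trace above readable: nvmf_tcp_init (nvmf/common.sh) has moved one E810 port into a private network namespace so the target and initiator can exchange NVMe/TCP over real hardware on a single box, and target/nsid.sh has started a second SPDK target with its own RPC socket and generated three UUIDs that will identify the test namespaces. A condensed sketch of that setup follows; the interface names (cvl_0_0/cvl_0_1), addresses and core masks are taken from this run, and the long Jenkins workspace paths are shortened to ./build and ./scripts, so treat it as an illustration of the pattern rather than the scripts themselves.

    # move the target-side port into its own namespace; the initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the harness tags the rule with an SPDK_NVMF comment so
    # nvmftestfini can later strip it with iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
    # tgt1: nvmf target inside the namespace, will listen on 10.0.0.2:4420, default RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    # tgt2: plain spdk_tgt in the root namespace, driven through its own RPC socket
    ./build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
    ./scripts/rpc.py -s /var/tmp/tgt2.sock rpc_get_methods >/dev/null   # one way to confirm tgt2 answers; the harness uses waitforlisten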
00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:27.411 [2024-11-20 16:22:58.279071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.411 [2024-11-20 16:22:58.322402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:27.411 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:27.701 [2024-11-20 16:22:58.840090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.701 [2024-11-20 16:22:58.856207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:27.701 nvme0n1 nvme0n2 00:21:27.701 nvme1n1 00:21:27.982 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:27.982 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:27.982 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:28.942 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:29.879 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:29.879 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:29.879 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:29.879 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:29.879 16:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:29.879 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid afbba18a-f078-4a9d-a983-0f01dcbd8b5a 00:21:29.879 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:29.879 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:29.879 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:29.879 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:29.879 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:29.879 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=afbba18af0784a9da9830f01dcbd8b5a 00:21:29.879 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AFBBA18AF0784A9DA9830F01DCBD8B5A 00:21:29.879 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ AFBBA18AF0784A9DA9830F01DCBD8B5A == \A\F\B\B\A\1\8\A\F\0\7\8\4\A\9\D\A\9\8\3\0\F\0\1\D\C\B\D\8\B\5\A ]] 00:21:29.879 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid e15aabd1-e7ad-46f7-8962-ff1e9b9233e9 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:29.880 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e15aabd1e7ad46f78962ff1e9b9233e9 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E15AABD1E7AD46F78962FF1E9B9233E9 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ E15AABD1E7AD46F78962FF1E9B9233E9 == \E\1\5\A\A\B\D\1\E\7\A\D\4\6\F\7\8\9\6\2\F\F\1\E\9\B\9\2\3\3\E\9 ]] 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:30.141 16:23:01 
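The block above is the heart of the nsid test: for each namespace, the NGUID reported by the target must equal the UUID the test generated, just with the dashes removed. Reduced to its essentials (the device node and UUID below are the ones from this run, nvme0n1 and ns1uuid, and the nvme-cli/jq calls mirror the nvme_get_nguid helper in the trace):

    uuid=afbba18a-f078-4a9d-a983-0f01dcbd8b5a               # ns1uuid, produced by uuidgen earlier
    expected=$(tr -d - <<< "$uuid")                         # uuid2nguid: same hex digits, no dashes
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    # the script uppercases both sides before the [[ ... == ... ]] comparison
    [[ ${nguid^^} == "${expected^^}" ]] && echo "nsid 1: NGUID matches its UUID"

The same three steps repeat for nvme0n2/ns2uuid and nvme0n3/ns3uuid in the trace that follows.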
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 0c915c32-8c62-4a0e-8e45-709aa861c97f 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0c915c328c624a0e8e45709aa861c97f 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0C915C328C624A0E8E45709AA861C97F 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 0C915C328C624A0E8E45709AA861C97F == \0\C\9\1\5\C\3\2\8\C\6\2\4\A\0\E\8\E\4\5\7\0\9\A\A\8\6\1\C\9\7\F ]] 00:21:30.141 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1985816 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1985816 ']' 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1985816 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1985816 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1985816' 00:21:30.401 killing process with pid 1985816 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1985816 00:21:30.401 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1985816 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.660 rmmod nvme_tcp 00:21:30.660 rmmod nvme_fabrics 00:21:30.660 rmmod nvme_keyring 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1985796 ']' 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1985796 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1985796 ']' 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1985796 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1985796 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1985796' 00:21:30.660 killing process with pid 1985796 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1985796 00:21:30.660 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1985796 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.920 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.458 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.458 00:21:33.458 real 0m12.422s 00:21:33.458 user 0m9.664s 
00:21:33.458 sys 0m5.527s 00:21:33.458 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.458 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:33.458 ************************************ 00:21:33.458 END TEST nvmf_nsid 00:21:33.458 ************************************ 00:21:33.458 16:23:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:33.458 00:21:33.458 real 11m59.803s 00:21:33.458 user 25m47.903s 00:21:33.458 sys 3m39.703s 00:21:33.458 16:23:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.458 16:23:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:33.458 ************************************ 00:21:33.458 END TEST nvmf_target_extra 00:21:33.458 ************************************ 00:21:33.458 16:23:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:33.458 16:23:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:33.458 16:23:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.458 16:23:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:33.458 ************************************ 00:21:33.458 START TEST nvmf_host 00:21:33.458 ************************************ 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:33.458 * Looking for test storage... 00:21:33.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.458 --rc genhtml_branch_coverage=1 00:21:33.458 --rc genhtml_function_coverage=1 00:21:33.458 --rc genhtml_legend=1 00:21:33.458 --rc geninfo_all_blocks=1 00:21:33.458 --rc geninfo_unexecuted_blocks=1 00:21:33.458 00:21:33.458 ' 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.458 --rc genhtml_branch_coverage=1 00:21:33.458 --rc genhtml_function_coverage=1 00:21:33.458 --rc genhtml_legend=1 00:21:33.458 --rc geninfo_all_blocks=1 00:21:33.458 --rc geninfo_unexecuted_blocks=1 00:21:33.458 00:21:33.458 ' 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.458 --rc genhtml_branch_coverage=1 00:21:33.458 --rc genhtml_function_coverage=1 00:21:33.458 --rc genhtml_legend=1 00:21:33.458 --rc geninfo_all_blocks=1 00:21:33.458 --rc geninfo_unexecuted_blocks=1 00:21:33.458 00:21:33.458 ' 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.458 --rc genhtml_branch_coverage=1 00:21:33.458 --rc genhtml_function_coverage=1 00:21:33.458 --rc genhtml_legend=1 00:21:33.458 --rc geninfo_all_blocks=1 00:21:33.458 --rc geninfo_unexecuted_blocks=1 00:21:33.458 00:21:33.458 ' 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
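The lcov gate traced above (lt 1.15 2 via cmp_versions in scripts/common.sh) is a field-by-field numeric compare of the two version strings after splitting them on '.', '-' and ':'. A trimmed-down sketch of that idea is below; the real helper also implements the other comparison operators, so this is only an illustration.

    # returns success when version $1 is strictly older than version $2
    version_lt() {
        local -a a b
        local i x y
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1    # equal, so not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"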
00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.458 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.459 ************************************ 00:21:33.459 START TEST nvmf_multicontroller 00:21:33.459 ************************************ 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:33.459 * Looking for test storage... 
00:21:33.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:33.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.459 --rc genhtml_branch_coverage=1 00:21:33.459 --rc genhtml_function_coverage=1 00:21:33.459 --rc genhtml_legend=1 00:21:33.459 --rc geninfo_all_blocks=1 00:21:33.459 --rc geninfo_unexecuted_blocks=1 00:21:33.459 00:21:33.459 ' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:33.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.459 --rc genhtml_branch_coverage=1 00:21:33.459 --rc genhtml_function_coverage=1 00:21:33.459 --rc genhtml_legend=1 00:21:33.459 --rc geninfo_all_blocks=1 00:21:33.459 --rc geninfo_unexecuted_blocks=1 00:21:33.459 00:21:33.459 ' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:33.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.459 --rc genhtml_branch_coverage=1 00:21:33.459 --rc genhtml_function_coverage=1 00:21:33.459 --rc genhtml_legend=1 00:21:33.459 --rc geninfo_all_blocks=1 00:21:33.459 --rc geninfo_unexecuted_blocks=1 00:21:33.459 00:21:33.459 ' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:33.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.459 --rc genhtml_branch_coverage=1 00:21:33.459 --rc genhtml_function_coverage=1 00:21:33.459 --rc genhtml_legend=1 00:21:33.459 --rc geninfo_all_blocks=1 00:21:33.459 --rc geninfo_unexecuted_blocks=1 00:21:33.459 00:21:33.459 ' 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:33.459 16:23:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.459 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:33.460 16:23:04 
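The "[: : integer expression expected" message that appears each time nvmf/common.sh is sourced (line 33, inside build_nvmf_app_args) is benign: a numeric test runs against a variable that is empty in this configuration, the test simply fails and the script carries on. The trace does not show which variable it is, so the snippet below only reproduces the pattern with a placeholder name and shows the usual way to keep it quiet.

    flag=''                             # placeholder; stands in for whatever setting is unset on this rig
    [ "$flag" -eq 1 ] && echo on        # prints '[: : integer expression expected', test returns false
    [ "${flag:-0}" -eq 1 ] && echo on   # defaulting the empty value to 0 avoids the warning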
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.460 16:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.033 
16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:40.033 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:40.033 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.033 16:23:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.033 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:40.034 Found net devices under 0000:86:00.0: cvl_0_0 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:40.034 Found net devices under 0000:86:00.1: cvl_0_1 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
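The device scan above is how nvmftestinit decides which NICs to use: the supported PCI device IDs are grouped into the e810/x722/mlx lists (this rig's two 0x8086:0x159b ports, bound to the ice driver, land in the e810 list), and each surviving PCI address is then mapped to its kernel interface name by globbing sysfs. That last step, shown with the first port from this log, is simply:

    pci=0000:86:00.0                                      # first E810 port on this machine
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )    # e.g. .../net/cvl_0_0
    pci_net_devs=( "${pci_net_devs[@]##*/}" )             # keep just the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"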
00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:21:40.034 00:21:40.034 --- 10.0.0.2 ping statistics --- 00:21:40.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.034 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:21:40.034 00:21:40.034 --- 10.0.0.1 ping statistics --- 00:21:40.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.034 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1990133 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1990133 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1990133 ']' 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.034 [2024-11-20 16:23:10.703800] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
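[editorial note] At this point nvmf_tcp_init has finished and nvmfappstart launches the NVMe-oF target inside the target namespace, then blocks until its RPC socket answers. The sketch below condenses what the traced helpers do; the nvmf_tgt command line and socket path are taken from this run, while the polling loop with rpc_get_methods is an assumed simplification of waitforlisten, not the helper's actual body.
# Launch nvmf_tgt in the target namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Assumed stand-in for waitforlisten: poll until the app responds on /var/tmp/spdk.sock.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done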
00:21:40.034 [2024-11-20 16:23:10.703850] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.034 [2024-11-20 16:23:10.785174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:40.034 [2024-11-20 16:23:10.827250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.034 [2024-11-20 16:23:10.827284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.034 [2024-11-20 16:23:10.827291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.034 [2024-11-20 16:23:10.827298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.034 [2024-11-20 16:23:10.827303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.034 [2024-11-20 16:23:10.828645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.034 [2024-11-20 16:23:10.828733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.034 [2024-11-20 16:23:10.828733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.034 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.035 [2024-11-20 16:23:10.973593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 16:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.035 Malloc0 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.035 [2024-11-20 16:23:11.037207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.035 [2024-11-20 16:23:11.045124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.035 Malloc1 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1990157 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1990157 /var/tmp/bdevperf.sock 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1990157 ']' 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
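[editorial note] With both subsystems and their listeners in place, the test starts a second SPDK application: bdevperf in wait-for-RPC mode (-z) on its own socket, so controllers can be attached and the workload driven over JSON-RPC. A condensed sketch of the fixture built so far is shown below; the commands are taken from the trace, with rpc.py standing in for the rpc_cmd wrapper against the target's default /var/tmp/spdk.sock socket (an equivalent form, not the literal test code).
# Target side: transport, malloc bdev, subsystem, and two TCP listeners.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# (cnode2 with Malloc1 is set up the same way in the trace above.)
# Initiator side: bdevperf idles with -z until driven over /var/tmp/bdevperf.sock.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &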
00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.035 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.295 NVMe0n1 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.295 1 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:40.295 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.296 request: 00:21:40.296 { 00:21:40.296 "name": "NVMe0", 00:21:40.296 "trtype": "tcp", 00:21:40.296 "traddr": "10.0.0.2", 00:21:40.296 "adrfam": "ipv4", 00:21:40.296 "trsvcid": "4420", 00:21:40.296 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:40.296 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:40.296 "hostaddr": "10.0.0.1", 00:21:40.296 "prchk_reftag": false, 00:21:40.296 "prchk_guard": false, 00:21:40.296 "hdgst": false, 00:21:40.296 "ddgst": false, 00:21:40.296 "allow_unrecognized_csi": false, 00:21:40.296 "method": "bdev_nvme_attach_controller", 00:21:40.296 "req_id": 1 00:21:40.296 } 00:21:40.296 Got JSON-RPC error response 00:21:40.296 response: 00:21:40.296 { 00:21:40.296 "code": -114, 00:21:40.296 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:40.296 } 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.296 request: 00:21:40.296 { 00:21:40.296 "name": "NVMe0", 00:21:40.296 "trtype": "tcp", 00:21:40.296 "traddr": "10.0.0.2", 00:21:40.296 "adrfam": "ipv4", 00:21:40.296 "trsvcid": "4420", 00:21:40.296 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:40.296 "hostaddr": "10.0.0.1", 00:21:40.296 "prchk_reftag": false, 00:21:40.296 "prchk_guard": false, 00:21:40.296 "hdgst": false, 00:21:40.296 "ddgst": false, 00:21:40.296 "allow_unrecognized_csi": false, 00:21:40.296 "method": "bdev_nvme_attach_controller", 00:21:40.296 "req_id": 1 00:21:40.296 } 00:21:40.296 Got JSON-RPC error response 00:21:40.296 response: 00:21:40.296 { 00:21:40.296 "code": -114, 00:21:40.296 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:40.296 } 00:21:40.296 16:23:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.296 request: 00:21:40.296 { 00:21:40.296 "name": "NVMe0", 00:21:40.296 "trtype": "tcp", 00:21:40.296 "traddr": "10.0.0.2", 00:21:40.296 "adrfam": "ipv4", 00:21:40.296 "trsvcid": "4420", 00:21:40.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.296 "hostaddr": "10.0.0.1", 00:21:40.296 "prchk_reftag": false, 00:21:40.296 "prchk_guard": false, 00:21:40.296 "hdgst": false, 00:21:40.296 "ddgst": false, 00:21:40.296 "multipath": "disable", 00:21:40.296 "allow_unrecognized_csi": false, 00:21:40.296 "method": "bdev_nvme_attach_controller", 00:21:40.296 "req_id": 1 00:21:40.296 } 00:21:40.296 Got JSON-RPC error response 00:21:40.296 response: 00:21:40.296 { 00:21:40.296 "code": -114, 00:21:40.296 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:40.296 } 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.296 16:23:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.296 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.556 request: 00:21:40.556 { 00:21:40.556 "name": "NVMe0", 00:21:40.556 "trtype": "tcp", 00:21:40.556 "traddr": "10.0.0.2", 00:21:40.556 "adrfam": "ipv4", 00:21:40.556 "trsvcid": "4420", 00:21:40.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.556 "hostaddr": "10.0.0.1", 00:21:40.556 "prchk_reftag": false, 00:21:40.556 "prchk_guard": false, 00:21:40.556 "hdgst": false, 00:21:40.556 "ddgst": false, 00:21:40.556 "multipath": "failover", 00:21:40.556 "allow_unrecognized_csi": false, 00:21:40.556 "method": "bdev_nvme_attach_controller", 00:21:40.556 "req_id": 1 00:21:40.556 } 00:21:40.556 Got JSON-RPC error response 00:21:40.556 response: 00:21:40.556 { 00:21:40.556 "code": -114, 00:21:40.556 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:40.556 } 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.556 NVMe0n1 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
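[editorial note] The rejected bdev_nvme_attach_controller calls above all reuse the name NVMe0 against the same listener (10.0.0.2:4420) and fail with -114 whether multipath is unset, set to disable, or set to failover; only the error message changes in the disable case. The final call above, which points the same controller name at the second listener on port 4421, is accepted and adds a path. A sketch of that accepted sequence, written as the equivalent rpc.py calls against the bdevperf socket (the test itself uses the rpc_cmd wrapper):
# First path, then a second path on port 4421 under the same controller name.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers   # used by the test to count attached controllers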
00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.556 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.815 00:21:40.815 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.815 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.815 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:40.815 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.815 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.815 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.815 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:40.815 16:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:42.193 { 00:21:42.193 "results": [ 00:21:42.193 { 00:21:42.193 "job": "NVMe0n1", 00:21:42.193 "core_mask": "0x1", 00:21:42.193 "workload": "write", 00:21:42.193 "status": "finished", 00:21:42.193 "queue_depth": 128, 00:21:42.193 "io_size": 4096, 00:21:42.193 "runtime": 1.004221, 00:21:42.193 "iops": 25111.006441809124, 00:21:42.193 "mibps": 98.08986891331689, 00:21:42.193 "io_failed": 0, 00:21:42.193 "io_timeout": 0, 00:21:42.193 "avg_latency_us": 5090.9920713350975, 00:21:42.193 "min_latency_us": 3089.554285714286, 00:21:42.193 "max_latency_us": 11796.48 00:21:42.193 } 00:21:42.193 ], 00:21:42.193 "core_count": 1 00:21:42.193 } 00:21:42.193 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:42.193 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.193 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.193 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.193 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:42.193 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1990157 00:21:42.193 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1990157 ']' 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1990157 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1990157 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1990157' 00:21:42.194 killing process with pid 1990157 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1990157 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1990157 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:42.194 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:42.194 [2024-11-20 16:23:11.148676] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:21:42.194 [2024-11-20 16:23:11.148722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1990157 ] 00:21:42.194 [2024-11-20 16:23:11.223132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.194 [2024-11-20 16:23:11.264443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.194 [2024-11-20 16:23:11.890832] bdev.c:4906:bdev_name_add: *ERROR*: Bdev name d0e89f24-0434-4e6d-8ca3-dc72e3702c70 already exists 00:21:42.194 [2024-11-20 16:23:11.890862] bdev.c:8106:bdev_register: *ERROR*: Unable to add uuid:d0e89f24-0434-4e6d-8ca3-dc72e3702c70 alias for bdev NVMe1n1 00:21:42.194 [2024-11-20 16:23:11.890870] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:42.194 Running I/O for 1 seconds... 00:21:42.194 25089.00 IOPS, 98.00 MiB/s 00:21:42.194 Latency(us) 00:21:42.194 [2024-11-20T15:23:13.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.194 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:42.194 NVMe0n1 : 1.00 25111.01 98.09 0.00 0.00 5090.99 3089.55 11796.48 00:21:42.194 [2024-11-20T15:23:13.428Z] =================================================================================================================== 00:21:42.194 [2024-11-20T15:23:13.428Z] Total : 25111.01 98.09 0.00 0.00 5090.99 3089.55 11796.48 00:21:42.194 Received shutdown signal, test time was about 1.000000 seconds 00:21:42.194 00:21:42.194 Latency(us) 00:21:42.194 [2024-11-20T15:23:13.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.194 [2024-11-20T15:23:13.428Z] =================================================================================================================== 00:21:42.194 [2024-11-20T15:23:13.428Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.194 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.194 rmmod nvme_tcp 00:21:42.194 rmmod nvme_fabrics 00:21:42.194 rmmod nvme_keyring 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:42.194 
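[editorial note] The modprobe -r and rmmod output above is the nvmfcleanup half of nvmftestfini; the remaining trace below kills the target process, strips the SPDK-tagged iptables rules and removes the test namespace. The following is a compact sketch of that teardown, an assumed shape of the helpers based on this trace rather than the script verbatim:
# Tear down the fixture created by nvmf_tcp_init / nvmfappstart.
modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # drop kernel initiator modules
kill "$nvmfpid" && wait "$nvmfpid"                       # killprocess (pid 1990133 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore     # remove only the rules tagged by ipts
ip netns delete cvl_0_0_ns_spdk                          # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                                 # clear the initiator-side address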
16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1990133 ']' 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1990133 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1990133 ']' 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1990133 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1990133 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1990133' 00:21:42.194 killing process with pid 1990133 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1990133 00:21:42.194 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1990133 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.454 16:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:44.991 00:21:44.991 real 0m11.238s 00:21:44.991 user 0m12.409s 00:21:44.991 sys 0m5.194s 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.991 ************************************ 00:21:44.991 END TEST nvmf_multicontroller 00:21:44.991 ************************************ 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.991 ************************************ 00:21:44.991 START TEST nvmf_aer 00:21:44.991 ************************************ 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:44.991 * Looking for test storage... 00:21:44.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:44.991 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:44.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.992 --rc genhtml_branch_coverage=1 00:21:44.992 --rc genhtml_function_coverage=1 00:21:44.992 --rc genhtml_legend=1 00:21:44.992 --rc geninfo_all_blocks=1 00:21:44.992 --rc geninfo_unexecuted_blocks=1 00:21:44.992 00:21:44.992 ' 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:44.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.992 --rc genhtml_branch_coverage=1 00:21:44.992 --rc genhtml_function_coverage=1 00:21:44.992 --rc genhtml_legend=1 00:21:44.992 --rc geninfo_all_blocks=1 00:21:44.992 --rc geninfo_unexecuted_blocks=1 00:21:44.992 00:21:44.992 ' 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:44.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.992 --rc genhtml_branch_coverage=1 00:21:44.992 --rc genhtml_function_coverage=1 00:21:44.992 --rc genhtml_legend=1 00:21:44.992 --rc geninfo_all_blocks=1 00:21:44.992 --rc geninfo_unexecuted_blocks=1 00:21:44.992 00:21:44.992 ' 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:44.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.992 --rc genhtml_branch_coverage=1 00:21:44.992 --rc genhtml_function_coverage=1 00:21:44.992 --rc genhtml_legend=1 00:21:44.992 --rc geninfo_all_blocks=1 00:21:44.992 --rc geninfo_unexecuted_blocks=1 00:21:44.992 00:21:44.992 ' 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:44.992 16:23:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:51.564 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:51.564 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:51.564 Found net devices under 0000:86:00.0: cvl_0_0 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.564 16:23:21 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:51.564 Found net devices under 0000:86:00.1: cvl_0_1 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:51.564 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:51.565 
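For readability, the network plumbing traced above condenses to roughly the following shell sketch (the interface names cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.0/24 addressing are the ones reported in this run; other hosts would enumerate the E810 ports differently):

  # Each E810 PCI function is mapped to its kernel net device via sysfs
  ls /sys/bus/pci/devices/0000:86:00.0/net/    # -> cvl_0_0 (target side)
  ls /sys/bus/pci/devices/0000:86:00.1/net/    # -> cvl_0_1 (initiator side)

  # Move the target port into its own namespace and address both ends
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Accept NVMe/TCP traffic on port 4420; the comment tag lets the teardown
  # path strip exactly these rules again (iptables-save | grep -v SPDK_NVMF | iptables-restore)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'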
16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:51.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:21:51.565 00:21:51.565 --- 10.0.0.2 ping statistics --- 00:21:51.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.565 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:51.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:21:51.565 00:21:51.565 --- 10.0.0.1 ping statistics --- 00:21:51.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.565 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1994071 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1994071 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1994071 ']' 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.565 16:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.565 [2024-11-20 16:23:21.975460] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
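Once a single ping succeeds in both directions, the target application is started inside that namespace. In harness shorthand (nvmfappstart and waitforlisten are helpers from the test tree; this is only a hedged sketch of what the trace above records):

  # -e 0xFFFF enables every tracepoint group, -m 0xF gives the target four
  # reactors (the startup banner below reports cores 0-3 coming up)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Block until the app answers on /var/tmp/spdk.sock before issuing any RPCs
  waitforlisten "$nvmfpid"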
00:21:51.565 [2024-11-20 16:23:21.975502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.565 [2024-11-20 16:23:22.055072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.565 [2024-11-20 16:23:22.097846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.565 [2024-11-20 16:23:22.097883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.565 [2024-11-20 16:23:22.097892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.565 [2024-11-20 16:23:22.097898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.565 [2024-11-20 16:23:22.097903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.565 [2024-11-20 16:23:22.099488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.565 [2024-11-20 16:23:22.099599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.565 [2024-11-20 16:23:22.099697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.565 [2024-11-20 16:23:22.099697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.825 [2024-11-20 16:23:22.848145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.825 Malloc0 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.825 [2024-11-20 16:23:22.913418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.825 [ 00:21:51.825 { 00:21:51.825 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:51.825 "subtype": "Discovery", 00:21:51.825 "listen_addresses": [], 00:21:51.825 "allow_any_host": true, 00:21:51.825 "hosts": [] 00:21:51.825 }, 00:21:51.825 { 00:21:51.825 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.825 "subtype": "NVMe", 00:21:51.825 "listen_addresses": [ 00:21:51.825 { 00:21:51.825 "trtype": "TCP", 00:21:51.825 "adrfam": "IPv4", 00:21:51.825 "traddr": "10.0.0.2", 00:21:51.825 "trsvcid": "4420" 00:21:51.825 } 00:21:51.825 ], 00:21:51.825 "allow_any_host": true, 00:21:51.825 "hosts": [], 00:21:51.825 "serial_number": "SPDK00000000000001", 00:21:51.825 "model_number": "SPDK bdev Controller", 00:21:51.825 "max_namespaces": 2, 00:21:51.825 "min_cntlid": 1, 00:21:51.825 "max_cntlid": 65519, 00:21:51.825 "namespaces": [ 00:21:51.825 { 00:21:51.825 "nsid": 1, 00:21:51.825 "bdev_name": "Malloc0", 00:21:51.825 "name": "Malloc0", 00:21:51.825 "nguid": "F55F492710EC471B93040EBA3F64D4AF", 00:21:51.825 "uuid": "f55f4927-10ec-471b-9304-0eba3f64d4af" 00:21:51.825 } 00:21:51.825 ] 00:21:51.825 } 00:21:51.825 ] 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1994184 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:51.825 16:23:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:51.825 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:51.825 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:51.825 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:51.825 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.084 Malloc1 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.084 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.084 Asynchronous Event Request test 00:21:52.084 Attaching to 10.0.0.2 00:21:52.084 Attached to 10.0.0.2 00:21:52.084 Registering asynchronous event callbacks... 00:21:52.084 Starting namespace attribute notice tests for all controllers... 00:21:52.084 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:52.084 aer_cb - Changed Namespace 00:21:52.084 Cleaning up... 
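Stripped of the xtrace noise, the aer.sh flow recorded above reduces to roughly this sequence (rpc_cmd and waitforfile are harness helpers, with rpc_cmd assumed here to wrap scripts/rpc.py against the default socket; the aer binary path is shortened relative to the SPDK tree):

  # Target-side provisioning, exactly as traced:
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: the aer example app connects over TCP, arms its AER callback
  # and touches the agreed file; the script polls for that file before
  # changing anything on the target.
  rm -f /tmp/aer_touch_file
  test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  waitforfile /tmp/aer_touch_file

  # Adding a second namespace is what fires the Namespace Attribute Changed
  # notice reported as "aer_cb - Changed Namespace" above.
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2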
00:21:52.084 [ 00:21:52.084 { 00:21:52.084 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:52.084 "subtype": "Discovery", 00:21:52.084 "listen_addresses": [], 00:21:52.084 "allow_any_host": true, 00:21:52.084 "hosts": [] 00:21:52.084 }, 00:21:52.084 { 00:21:52.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.084 "subtype": "NVMe", 00:21:52.084 "listen_addresses": [ 00:21:52.084 { 00:21:52.084 "trtype": "TCP", 00:21:52.084 "adrfam": "IPv4", 00:21:52.084 "traddr": "10.0.0.2", 00:21:52.084 "trsvcid": "4420" 00:21:52.084 } 00:21:52.084 ], 00:21:52.084 "allow_any_host": true, 00:21:52.084 "hosts": [], 00:21:52.084 "serial_number": "SPDK00000000000001", 00:21:52.084 "model_number": "SPDK bdev Controller", 00:21:52.084 "max_namespaces": 2, 00:21:52.084 "min_cntlid": 1, 00:21:52.084 "max_cntlid": 65519, 00:21:52.343 "namespaces": [ 00:21:52.343 { 00:21:52.343 "nsid": 1, 00:21:52.343 "bdev_name": "Malloc0", 00:21:52.343 "name": "Malloc0", 00:21:52.343 "nguid": "F55F492710EC471B93040EBA3F64D4AF", 00:21:52.343 "uuid": "f55f4927-10ec-471b-9304-0eba3f64d4af" 00:21:52.343 }, 00:21:52.343 { 00:21:52.343 "nsid": 2, 00:21:52.343 "bdev_name": "Malloc1", 00:21:52.343 "name": "Malloc1", 00:21:52.343 "nguid": "A2C50745782543109AA42570A9E11BAF", 00:21:52.343 "uuid": "a2c50745-7825-4310-9aa4-2570a9e11baf" 00:21:52.343 } 00:21:52.343 ] 00:21:52.343 } 00:21:52.343 ] 00:21:52.343 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.343 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1994184 00:21:52.343 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:52.343 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.343 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.343 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.343 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:52.343 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.344 rmmod 
nvme_tcp 00:21:52.344 rmmod nvme_fabrics 00:21:52.344 rmmod nvme_keyring 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1994071 ']' 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1994071 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1994071 ']' 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1994071 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1994071 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1994071' 00:21:52.344 killing process with pid 1994071 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1994071 00:21:52.344 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1994071 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.603 16:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.510 16:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.510 00:21:54.510 real 0m9.966s 00:21:54.510 user 0m8.143s 00:21:54.510 sys 0m4.891s 00:21:54.510 16:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.510 16:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.510 ************************************ 00:21:54.510 END TEST nvmf_aer 00:21:54.510 ************************************ 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.770 ************************************ 00:21:54.770 START TEST nvmf_async_init 00:21:54.770 ************************************ 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:54.770 * Looking for test storage... 00:21:54.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.770 --rc genhtml_branch_coverage=1 00:21:54.770 --rc genhtml_function_coverage=1 00:21:54.770 --rc genhtml_legend=1 00:21:54.770 --rc geninfo_all_blocks=1 00:21:54.770 --rc geninfo_unexecuted_blocks=1 00:21:54.770 00:21:54.770 ' 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.770 --rc genhtml_branch_coverage=1 00:21:54.770 --rc genhtml_function_coverage=1 00:21:54.770 --rc genhtml_legend=1 00:21:54.770 --rc geninfo_all_blocks=1 00:21:54.770 --rc geninfo_unexecuted_blocks=1 00:21:54.770 00:21:54.770 ' 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.770 --rc genhtml_branch_coverage=1 00:21:54.770 --rc genhtml_function_coverage=1 00:21:54.770 --rc genhtml_legend=1 00:21:54.770 --rc geninfo_all_blocks=1 00:21:54.770 --rc geninfo_unexecuted_blocks=1 00:21:54.770 00:21:54.770 ' 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.770 --rc genhtml_branch_coverage=1 00:21:54.770 --rc genhtml_function_coverage=1 00:21:54.770 --rc genhtml_legend=1 00:21:54.770 --rc geninfo_all_blocks=1 00:21:54.770 --rc geninfo_unexecuted_blocks=1 00:21:54.770 00:21:54.770 ' 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.770 16:23:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.770 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:54.771 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:54.771 16:23:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:55.030 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:55.030 16:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8f96a156fdbe4b93bf291bb316aa3c67 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:55.030 16:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:01.602 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:01.602 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.602 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:01.602 Found net devices under 0000:86:00.0: cvl_0_0 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:01.603 Found net devices under 0000:86:00.1: cvl_0_1 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.603 16:23:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:22:01.603 00:22:01.603 --- 10.0.0.2 ping statistics --- 00:22:01.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.603 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:01.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:22:01.603 00:22:01.603 --- 10.0.0.1 ping statistics --- 00:22:01.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.603 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1997802 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1997802 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1997802 ']' 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.603 16:23:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.603 [2024-11-20 16:23:32.006478] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
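The nvmf_async_init case starting here repeats the same namespace setup but runs a single-core target and backs its test namespace with a null bdev instead of malloc. Condensed from the trace (the null bdev geometry and the dash-stripped NGUID are the values printed above; the transport and subsystem RPCs follow in the trace below):

  # Second target instance, one reactor only (-m 0x1)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  waitforlisten "$nvmfpid"

  # async_init.sh parameters: a 1024-block, 512 B null bdev named null0,
  # plus an NGUID derived from a fresh UUID with the dashes removed
  null_bdev_size=1024
  null_block_size=512
  null_bdev=null0
  nguid=$(uuidgen | tr -d -)     # 8f96a156fdbe4b93bf291bb316aa3c67 in this run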
00:22:01.603 [2024-11-20 16:23:32.006522] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.603 [2024-11-20 16:23:32.087860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.603 [2024-11-20 16:23:32.130358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.603 [2024-11-20 16:23:32.130406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.603 [2024-11-20 16:23:32.130414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.603 [2024-11-20 16:23:32.130420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.603 [2024-11-20 16:23:32.130428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.603 [2024-11-20 16:23:32.130948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.862 [2024-11-20 16:23:32.881862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.862 null0 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8f96a156fdbe4b93bf291bb316aa3c67 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.862 [2024-11-20 16:23:32.930120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.862 16:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.121 nvme0n1 00:22:02.121 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.121 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.121 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.121 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.121 [ 00:22:02.121 { 00:22:02.121 "name": "nvme0n1", 00:22:02.121 "aliases": [ 00:22:02.121 "8f96a156-fdbe-4b93-bf29-1bb316aa3c67" 00:22:02.121 ], 00:22:02.121 "product_name": "NVMe disk", 00:22:02.121 "block_size": 512, 00:22:02.121 "num_blocks": 2097152, 00:22:02.121 "uuid": "8f96a156-fdbe-4b93-bf29-1bb316aa3c67", 00:22:02.121 "numa_id": 1, 00:22:02.121 "assigned_rate_limits": { 00:22:02.121 "rw_ios_per_sec": 0, 00:22:02.121 "rw_mbytes_per_sec": 0, 00:22:02.121 "r_mbytes_per_sec": 0, 00:22:02.121 "w_mbytes_per_sec": 0 00:22:02.121 }, 00:22:02.121 "claimed": false, 00:22:02.121 "zoned": false, 00:22:02.121 "supported_io_types": { 00:22:02.121 "read": true, 00:22:02.121 "write": true, 00:22:02.121 "unmap": false, 00:22:02.121 "flush": true, 00:22:02.121 "reset": true, 00:22:02.121 "nvme_admin": true, 00:22:02.121 "nvme_io": true, 00:22:02.121 "nvme_io_md": false, 00:22:02.121 "write_zeroes": true, 00:22:02.121 "zcopy": false, 00:22:02.121 "get_zone_info": false, 00:22:02.121 "zone_management": false, 00:22:02.121 "zone_append": false, 00:22:02.121 "compare": true, 00:22:02.121 "compare_and_write": true, 00:22:02.121 "abort": true, 00:22:02.121 "seek_hole": false, 00:22:02.121 "seek_data": false, 00:22:02.121 "copy": true, 00:22:02.121 "nvme_iov_md": false 00:22:02.121 }, 00:22:02.121 
"memory_domains": [ 00:22:02.121 { 00:22:02.121 "dma_device_id": "system", 00:22:02.121 "dma_device_type": 1 00:22:02.121 } 00:22:02.121 ], 00:22:02.121 "driver_specific": { 00:22:02.121 "nvme": [ 00:22:02.121 { 00:22:02.121 "trid": { 00:22:02.121 "trtype": "TCP", 00:22:02.121 "adrfam": "IPv4", 00:22:02.121 "traddr": "10.0.0.2", 00:22:02.121 "trsvcid": "4420", 00:22:02.121 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.121 }, 00:22:02.121 "ctrlr_data": { 00:22:02.121 "cntlid": 1, 00:22:02.121 "vendor_id": "0x8086", 00:22:02.121 "model_number": "SPDK bdev Controller", 00:22:02.121 "serial_number": "00000000000000000000", 00:22:02.121 "firmware_revision": "25.01", 00:22:02.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.121 "oacs": { 00:22:02.121 "security": 0, 00:22:02.121 "format": 0, 00:22:02.121 "firmware": 0, 00:22:02.121 "ns_manage": 0 00:22:02.121 }, 00:22:02.121 "multi_ctrlr": true, 00:22:02.121 "ana_reporting": false 00:22:02.121 }, 00:22:02.121 "vs": { 00:22:02.121 "nvme_version": "1.3" 00:22:02.121 }, 00:22:02.121 "ns_data": { 00:22:02.121 "id": 1, 00:22:02.121 "can_share": true 00:22:02.121 } 00:22:02.121 } 00:22:02.121 ], 00:22:02.121 "mp_policy": "active_passive" 00:22:02.121 } 00:22:02.121 } 00:22:02.121 ] 00:22:02.121 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.121 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:02.121 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.121 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.121 [2024-11-20 16:23:33.194708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:02.121 [2024-11-20 16:23:33.194767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce7900 (9): Bad file descriptor 00:22:02.122 [2024-11-20 16:23:33.326287] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:22:02.122 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.122 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.122 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.122 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.122 [ 00:22:02.122 { 00:22:02.122 "name": "nvme0n1", 00:22:02.122 "aliases": [ 00:22:02.122 "8f96a156-fdbe-4b93-bf29-1bb316aa3c67" 00:22:02.122 ], 00:22:02.122 "product_name": "NVMe disk", 00:22:02.122 "block_size": 512, 00:22:02.122 "num_blocks": 2097152, 00:22:02.122 "uuid": "8f96a156-fdbe-4b93-bf29-1bb316aa3c67", 00:22:02.122 "numa_id": 1, 00:22:02.122 "assigned_rate_limits": { 00:22:02.122 "rw_ios_per_sec": 0, 00:22:02.122 "rw_mbytes_per_sec": 0, 00:22:02.122 "r_mbytes_per_sec": 0, 00:22:02.122 "w_mbytes_per_sec": 0 00:22:02.122 }, 00:22:02.122 "claimed": false, 00:22:02.122 "zoned": false, 00:22:02.122 "supported_io_types": { 00:22:02.122 "read": true, 00:22:02.122 "write": true, 00:22:02.122 "unmap": false, 00:22:02.122 "flush": true, 00:22:02.122 "reset": true, 00:22:02.122 "nvme_admin": true, 00:22:02.122 "nvme_io": true, 00:22:02.122 "nvme_io_md": false, 00:22:02.122 "write_zeroes": true, 00:22:02.122 "zcopy": false, 00:22:02.122 "get_zone_info": false, 00:22:02.122 "zone_management": false, 00:22:02.122 "zone_append": false, 00:22:02.122 "compare": true, 00:22:02.122 "compare_and_write": true, 00:22:02.122 "abort": true, 00:22:02.122 "seek_hole": false, 00:22:02.122 "seek_data": false, 00:22:02.122 "copy": true, 00:22:02.122 "nvme_iov_md": false 00:22:02.122 }, 00:22:02.122 "memory_domains": [ 00:22:02.122 { 00:22:02.122 "dma_device_id": "system", 00:22:02.122 "dma_device_type": 1 00:22:02.122 } 00:22:02.122 ], 00:22:02.122 "driver_specific": { 00:22:02.122 "nvme": [ 00:22:02.122 { 00:22:02.122 "trid": { 00:22:02.122 "trtype": "TCP", 00:22:02.122 "adrfam": "IPv4", 00:22:02.122 "traddr": "10.0.0.2", 00:22:02.122 "trsvcid": "4420", 00:22:02.122 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.122 }, 00:22:02.122 "ctrlr_data": { 00:22:02.122 "cntlid": 2, 00:22:02.122 "vendor_id": "0x8086", 00:22:02.122 "model_number": "SPDK bdev Controller", 00:22:02.122 "serial_number": "00000000000000000000", 00:22:02.122 "firmware_revision": "25.01", 00:22:02.122 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.122 "oacs": { 00:22:02.122 "security": 0, 00:22:02.122 "format": 0, 00:22:02.122 "firmware": 0, 00:22:02.122 "ns_manage": 0 00:22:02.122 }, 00:22:02.122 "multi_ctrlr": true, 00:22:02.122 "ana_reporting": false 00:22:02.122 }, 00:22:02.122 "vs": { 00:22:02.122 "nvme_version": "1.3" 00:22:02.122 }, 00:22:02.122 "ns_data": { 00:22:02.122 "id": 1, 00:22:02.122 "can_share": true 00:22:02.122 } 00:22:02.122 } 00:22:02.122 ], 00:22:02.122 "mp_policy": "active_passive" 00:22:02.122 } 00:22:02.122 } 00:22:02.122 ] 00:22:02.122 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.122 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.122 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.122 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
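Between the two bdev_get_bdevs dumps, the only field that changes is ctrlr_data.cntlid (1 before the reset, 2 after), which is how the test confirms the namespace survived the disconnect and came back under a fresh controller. One way to pull that field out of the RPC output is sketched below; the jq path follows the JSON structure shown above, but jq itself is illustrative and not part of the test.

    # Sketch: watch the controller ID across a reset (jq path matches the dumps above).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    cntlid_before=$($RPC bdev_get_bdevs -b nvme0n1 | jq -r '.[0].driver_specific.nvme[0].ctrlr_data.cntlid')
    $RPC bdev_nvme_reset_controller nvme0
    cntlid_after=$($RPC bdev_get_bdevs -b nvme0n1 | jq -r '.[0].driver_specific.nvme[0].ctrlr_data.cntlid')
    echo "cntlid: $cntlid_before -> $cntlid_after"        # 1 -> 2 in the run logged above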
00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.mqJLrUn3EI 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.mqJLrUn3EI 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.mqJLrUn3EI 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.382 [2024-11-20 16:23:33.399324] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.382 [2024-11-20 16:23:33.399433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.382 [2024-11-20 16:23:33.419401] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.382 nvme0n1 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.382 [ 00:22:02.382 { 00:22:02.382 "name": "nvme0n1", 00:22:02.382 "aliases": [ 00:22:02.382 "8f96a156-fdbe-4b93-bf29-1bb316aa3c67" 00:22:02.382 ], 00:22:02.382 "product_name": "NVMe disk", 00:22:02.382 "block_size": 512, 00:22:02.382 "num_blocks": 2097152, 00:22:02.382 "uuid": "8f96a156-fdbe-4b93-bf29-1bb316aa3c67", 00:22:02.382 "numa_id": 1, 00:22:02.382 "assigned_rate_limits": { 00:22:02.382 "rw_ios_per_sec": 0, 00:22:02.382 "rw_mbytes_per_sec": 0, 00:22:02.382 "r_mbytes_per_sec": 0, 00:22:02.382 "w_mbytes_per_sec": 0 00:22:02.382 }, 00:22:02.382 "claimed": false, 00:22:02.382 "zoned": false, 00:22:02.382 "supported_io_types": { 00:22:02.382 "read": true, 00:22:02.382 "write": true, 00:22:02.382 "unmap": false, 00:22:02.382 "flush": true, 00:22:02.382 "reset": true, 00:22:02.382 "nvme_admin": true, 00:22:02.382 "nvme_io": true, 00:22:02.382 "nvme_io_md": false, 00:22:02.382 "write_zeroes": true, 00:22:02.382 "zcopy": false, 00:22:02.382 "get_zone_info": false, 00:22:02.382 "zone_management": false, 00:22:02.382 "zone_append": false, 00:22:02.382 "compare": true, 00:22:02.382 "compare_and_write": true, 00:22:02.382 "abort": true, 00:22:02.382 "seek_hole": false, 00:22:02.382 "seek_data": false, 00:22:02.382 "copy": true, 00:22:02.382 "nvme_iov_md": false 00:22:02.382 }, 00:22:02.382 "memory_domains": [ 00:22:02.382 { 00:22:02.382 "dma_device_id": "system", 00:22:02.382 "dma_device_type": 1 00:22:02.382 } 00:22:02.382 ], 00:22:02.382 "driver_specific": { 00:22:02.382 "nvme": [ 00:22:02.382 { 00:22:02.382 "trid": { 00:22:02.382 "trtype": "TCP", 00:22:02.382 "adrfam": "IPv4", 00:22:02.382 "traddr": "10.0.0.2", 00:22:02.382 "trsvcid": "4421", 00:22:02.382 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.382 }, 00:22:02.382 "ctrlr_data": { 00:22:02.382 "cntlid": 3, 00:22:02.382 "vendor_id": "0x8086", 00:22:02.382 "model_number": "SPDK bdev Controller", 00:22:02.382 "serial_number": "00000000000000000000", 00:22:02.382 "firmware_revision": "25.01", 00:22:02.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.382 "oacs": { 00:22:02.382 "security": 0, 00:22:02.382 "format": 0, 00:22:02.382 "firmware": 0, 00:22:02.382 "ns_manage": 0 00:22:02.382 }, 00:22:02.382 "multi_ctrlr": true, 00:22:02.382 "ana_reporting": false 00:22:02.382 }, 00:22:02.382 "vs": { 00:22:02.382 "nvme_version": "1.3" 00:22:02.382 }, 00:22:02.382 "ns_data": { 00:22:02.382 "id": 1, 00:22:02.382 "can_share": true 00:22:02.382 } 00:22:02.382 } 00:22:02.382 ], 00:22:02.382 "mp_policy": "active_passive" 00:22:02.382 } 00:22:02.382 } 00:22:02.382 ] 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.mqJLrUn3EI 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
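The async_init.sh@53 through @78 trace above repeats the attach over a TLS-protected listener: a pre-shared key is written to a temp file and registered with the keyring, the subsystem is switched from allow-any-host to an explicit host entry bound to that key, a --secure-channel listener is opened on port 4421, and the host reconnects with the matching --psk (the resulting bdev_get_bdevs dump shows trsvcid 4421 and cntlid 3). A condensed sketch of the same steps follows; the temp-file name and rpc.py path are illustrative, and the PSK is the interchange-format test key printed in the log, not a secret.

    # Sketch: TLS PSK setup mirroring the async_init.sh trace above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode0
    HOSTNQN=nqn.2016-06.io.spdk:host1

    key_path=$(mktemp)                                    # /tmp/tmp.mqJLrUn3EI in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"                                # restrict permissions, as the test does

    $RPC keyring_file_add_key key0 "$key_path"            # register the PSK as keyring entry "key0"
    $RPC nvmf_subsystem_allow_any_host "$NQN" --disable   # require an explicit host entry from here on
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $RPC nvmf_subsystem_add_host "$NQN" "$HOSTNQN" --psk key0
    # Host side: connect to the TLS listener with the same key and host NQN.
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n "$NQN" -q "$HOSTNQN" --psk key0

    # Cleanup, as at the end of this phase: detach and remove the key file.
    $RPC bdev_nvme_detach_controller nvme0
    rm -f "$key_path"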
00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:02.382 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:02.383 rmmod nvme_tcp 00:22:02.383 rmmod nvme_fabrics 00:22:02.383 rmmod nvme_keyring 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1997802 ']' 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1997802 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1997802 ']' 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1997802 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.383 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1997802 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1997802' 00:22:02.642 killing process with pid 1997802 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1997802 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1997802 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
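nvmftestfini, traced above, is the shared teardown path: flush and unload the host NVMe/TCP modules, kill the nvmf_tgt started for this test, strip the iptables rule tagged SPDK_NVMF, and remove the per-test network namespace (the _remove_spdk_ns helper whose trace continues below). Roughly the following, done by hand; the namespace-deletion line is an approximation of that helper, and wait only succeeds because nvmf_tgt is a child of the test shell.

    # Sketch: manual equivalent of the nvmftestfini steps traced above.
    sync
    modprobe -v -r nvme-tcp                               # also pulls out nvme_fabrics / nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                    # pid 1997802 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only rules carrying the SPDK_NVMF comment
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # stand-in for _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # final flush, matching nvmf/common.sh@303 below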
00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.642 16:23:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.179 16:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:05.179 00:22:05.179 real 0m10.066s 00:22:05.179 user 0m3.822s 00:22:05.179 sys 0m4.850s 00:22:05.179 16:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.179 16:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.179 ************************************ 00:22:05.179 END TEST nvmf_async_init 00:22:05.179 ************************************ 00:22:05.179 16:23:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:05.179 16:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:05.179 16:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.179 16:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.179 ************************************ 00:22:05.179 START TEST dma 00:22:05.179 ************************************ 00:22:05.179 16:23:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:05.179 * Looking for test storage... 00:22:05.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:05.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.179 --rc genhtml_branch_coverage=1 00:22:05.179 --rc genhtml_function_coverage=1 00:22:05.179 --rc genhtml_legend=1 00:22:05.179 --rc geninfo_all_blocks=1 00:22:05.179 --rc geninfo_unexecuted_blocks=1 00:22:05.179 00:22:05.179 ' 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:05.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.179 --rc genhtml_branch_coverage=1 00:22:05.179 --rc genhtml_function_coverage=1 00:22:05.179 --rc genhtml_legend=1 00:22:05.179 --rc geninfo_all_blocks=1 00:22:05.179 --rc geninfo_unexecuted_blocks=1 00:22:05.179 00:22:05.179 ' 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:05.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.179 --rc genhtml_branch_coverage=1 00:22:05.179 --rc genhtml_function_coverage=1 00:22:05.179 --rc genhtml_legend=1 00:22:05.179 --rc geninfo_all_blocks=1 00:22:05.179 --rc geninfo_unexecuted_blocks=1 00:22:05.179 00:22:05.179 ' 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:05.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.179 --rc genhtml_branch_coverage=1 00:22:05.179 --rc genhtml_function_coverage=1 00:22:05.179 --rc genhtml_legend=1 00:22:05.179 --rc geninfo_all_blocks=1 00:22:05.179 --rc geninfo_unexecuted_blocks=1 00:22:05.179 00:22:05.179 ' 00:22:05.179 16:23:36 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.180 
16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:05.180 00:22:05.180 real 0m0.207s 00:22:05.180 user 0m0.126s 00:22:05.180 sys 0m0.094s 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:05.180 ************************************ 00:22:05.180 END TEST dma 00:22:05.180 ************************************ 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.180 ************************************ 00:22:05.180 START TEST nvmf_identify 00:22:05.180 
************************************ 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:05.180 * Looking for test storage... 00:22:05.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.180 --rc genhtml_branch_coverage=1 00:22:05.180 --rc genhtml_function_coverage=1 00:22:05.180 --rc genhtml_legend=1 00:22:05.180 --rc geninfo_all_blocks=1 00:22:05.180 --rc geninfo_unexecuted_blocks=1 00:22:05.180 00:22:05.180 ' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.180 --rc genhtml_branch_coverage=1 00:22:05.180 --rc genhtml_function_coverage=1 00:22:05.180 --rc genhtml_legend=1 00:22:05.180 --rc geninfo_all_blocks=1 00:22:05.180 --rc geninfo_unexecuted_blocks=1 00:22:05.180 00:22:05.180 ' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.180 --rc genhtml_branch_coverage=1 00:22:05.180 --rc genhtml_function_coverage=1 00:22:05.180 --rc genhtml_legend=1 00:22:05.180 --rc geninfo_all_blocks=1 00:22:05.180 --rc geninfo_unexecuted_blocks=1 00:22:05.180 00:22:05.180 ' 00:22:05.180 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.181 --rc genhtml_branch_coverage=1 00:22:05.181 --rc genhtml_function_coverage=1 00:22:05.181 --rc genhtml_legend=1 00:22:05.181 --rc geninfo_all_blocks=1 00:22:05.181 --rc geninfo_unexecuted_blocks=1 00:22:05.181 00:22:05.181 ' 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.181 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:05.441 16:23:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.015 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:12.016 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:12.016 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:12.016 Found net devices under 0000:86:00.0: cvl_0_0 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:12.016 Found net devices under 0000:86:00.1: cvl_0_1 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:12.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:22:12.016 00:22:12.016 --- 10.0.0.2 ping statistics --- 00:22:12.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.016 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:12.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:22:12.016 00:22:12.016 --- 10.0.0.1 ping statistics --- 00:22:12.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.016 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2001768 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2001768 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2001768 ']' 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.016 16:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.016 [2024-11-20 16:23:42.403295] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
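The nvmf_tcp_init phase traced above builds the back-to-back test topology: the target-side port cvl_0_0 is moved into its own network namespace, both sides get 10.0.0.x/24 addresses, port 4420 is opened on the initiator interface, and connectivity is verified in both directions before nvmf_tgt is launched inside the namespace. A condensed replay of those commands is sketched below (run as root; interface names, addresses and the relative build path assume the same setup as this log).

# Condensed replay of the nvmf_tcp_init steps seen in the trace.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"                                   # target-side namespace
ip link set cvl_0_0 netns "$NS"                      # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
# the target application then runs inside the namespace:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF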
00:22:12.016 [2024-11-20 16:23:42.403342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.017 [2024-11-20 16:23:42.483093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:12.017 [2024-11-20 16:23:42.526649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.017 [2024-11-20 16:23:42.526683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.017 [2024-11-20 16:23:42.526690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.017 [2024-11-20 16:23:42.526695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.017 [2024-11-20 16:23:42.526700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.017 [2024-11-20 16:23:42.528123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.017 [2024-11-20 16:23:42.528242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.017 [2024-11-20 16:23:42.528273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.017 [2024-11-20 16:23:42.528273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.017 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.017 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:12.017 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.017 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.017 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.277 [2024-11-20 16:23:43.248633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.277 Malloc0 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.277 [2024-11-20 16:23:43.346903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.277 [ 00:22:12.277 { 00:22:12.277 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:12.277 "subtype": "Discovery", 00:22:12.277 "listen_addresses": [ 00:22:12.277 { 00:22:12.277 "trtype": "TCP", 00:22:12.277 "adrfam": "IPv4", 00:22:12.277 "traddr": "10.0.0.2", 00:22:12.277 "trsvcid": "4420" 00:22:12.277 } 00:22:12.277 ], 00:22:12.277 "allow_any_host": true, 00:22:12.277 "hosts": [] 00:22:12.277 }, 00:22:12.277 { 00:22:12.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.277 "subtype": "NVMe", 00:22:12.277 "listen_addresses": [ 00:22:12.277 { 00:22:12.277 "trtype": "TCP", 00:22:12.277 "adrfam": "IPv4", 00:22:12.277 "traddr": "10.0.0.2", 00:22:12.277 "trsvcid": "4420" 00:22:12.277 } 00:22:12.277 ], 00:22:12.277 "allow_any_host": true, 00:22:12.277 "hosts": [], 00:22:12.277 "serial_number": "SPDK00000000000001", 00:22:12.277 "model_number": "SPDK bdev Controller", 00:22:12.277 "max_namespaces": 32, 00:22:12.277 "min_cntlid": 1, 00:22:12.277 "max_cntlid": 65519, 00:22:12.277 "namespaces": [ 00:22:12.277 { 00:22:12.277 "nsid": 1, 00:22:12.277 "bdev_name": "Malloc0", 00:22:12.277 "name": "Malloc0", 00:22:12.277 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:12.277 "eui64": "ABCDEF0123456789", 00:22:12.277 "uuid": "80fbc178-7ba2-4089-bc1a-094e7d0905c4" 00:22:12.277 } 00:22:12.277 ] 00:22:12.277 } 00:22:12.277 ] 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.277 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:12.277 [2024-11-20 16:23:43.400885] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:22:12.277 [2024-11-20 16:23:43.400930] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001856 ] 00:22:12.277 [2024-11-20 16:23:43.438923] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:12.277 [2024-11-20 16:23:43.438967] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:12.278 [2024-11-20 16:23:43.438975] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:12.278 [2024-11-20 16:23:43.438989] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:12.278 [2024-11-20 16:23:43.438999] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:12.278 [2024-11-20 16:23:43.446499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:12.278 [2024-11-20 16:23:43.446533] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x229b690 0 00:22:12.278 [2024-11-20 16:23:43.454213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:12.278 [2024-11-20 16:23:43.454227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:12.278 [2024-11-20 16:23:43.454232] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:12.278 [2024-11-20 16:23:43.454235] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:12.278 [2024-11-20 16:23:43.454266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.454271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.454275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b690) 00:22:12.278 [2024-11-20 16:23:43.454287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:12.278 [2024-11-20 16:23:43.454303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:22:12.278 [2024-11-20 16:23:43.461212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.278 [2024-11-20 16:23:43.461221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.278 [2024-11-20 16:23:43.461224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b690 00:22:12.278 [2024-11-20 16:23:43.461240] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:12.278 [2024-11-20 16:23:43.461246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:12.278 [2024-11-20 16:23:43.461251] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:12.278 [2024-11-20 16:23:43.461263] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b690) 00:22:12.278 [2024-11-20 16:23:43.461277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-11-20 16:23:43.461290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:22:12.278 [2024-11-20 16:23:43.461430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.278 [2024-11-20 16:23:43.461436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.278 [2024-11-20 16:23:43.461439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b690 00:22:12.278 [2024-11-20 16:23:43.461448] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:12.278 [2024-11-20 16:23:43.461454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:12.278 [2024-11-20 16:23:43.461460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b690) 00:22:12.278 [2024-11-20 16:23:43.461475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-11-20 16:23:43.461485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:22:12.278 [2024-11-20 16:23:43.461548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.278 [2024-11-20 16:23:43.461553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.278 [2024-11-20 16:23:43.461556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b690 00:22:12.278 [2024-11-20 16:23:43.461565] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:12.278 [2024-11-20 16:23:43.461571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:12.278 [2024-11-20 16:23:43.461577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b690) 00:22:12.278 [2024-11-20 16:23:43.461589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-11-20 16:23:43.461598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 
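Before the identify debug trace above, host/identify.sh configured the target through a short rpc_cmd sequence (create the TCP transport, a 64 MiB Malloc bdev, the cnode1 subsystem with that namespace, and listeners for both cnode1 and discovery on 10.0.0.2:4420). A sketch of the equivalent direct scripts/rpc.py invocations is below; it assumes the default /var/tmp/spdk.sock RPC socket of the nvmf_tgt started earlier, and the flags simply mirror the traced commands.

# Same configuration as the traced rpc_cmd calls, issued via scripts/rpc.py.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems     # prints the JSON subsystem list shown above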
00:22:12.278 [2024-11-20 16:23:43.461665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.278 [2024-11-20 16:23:43.461670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.278 [2024-11-20 16:23:43.461674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b690 00:22:12.278 [2024-11-20 16:23:43.461681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:12.278 [2024-11-20 16:23:43.461689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b690) 00:22:12.278 [2024-11-20 16:23:43.461701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-11-20 16:23:43.461710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:22:12.278 [2024-11-20 16:23:43.461774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.278 [2024-11-20 16:23:43.461780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.278 [2024-11-20 16:23:43.461783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b690 00:22:12.278 [2024-11-20 16:23:43.461790] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:12.278 [2024-11-20 16:23:43.461794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:12.278 [2024-11-20 16:23:43.461801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:12.278 [2024-11-20 16:23:43.461908] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:12.278 [2024-11-20 16:23:43.461913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:12.278 [2024-11-20 16:23:43.461922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.461928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b690) 00:22:12.278 [2024-11-20 16:23:43.461933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-11-20 16:23:43.461943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:22:12.278 [2024-11-20 16:23:43.462009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.278 [2024-11-20 16:23:43.462015] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.278 [2024-11-20 16:23:43.462018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.462021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b690 00:22:12.278 [2024-11-20 16:23:43.462025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:12.278 [2024-11-20 16:23:43.462033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.462036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.462039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b690) 00:22:12.278 [2024-11-20 16:23:43.462045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-11-20 16:23:43.462054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:22:12.278 [2024-11-20 16:23:43.462117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.278 [2024-11-20 16:23:43.462123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.278 [2024-11-20 16:23:43.462126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.462130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b690 00:22:12.278 [2024-11-20 16:23:43.462133] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:12.278 [2024-11-20 16:23:43.462138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:12.278 [2024-11-20 16:23:43.462145] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:12.278 [2024-11-20 16:23:43.462154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:12.278 [2024-11-20 16:23:43.462162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.462165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b690) 00:22:12.278 [2024-11-20 16:23:43.462171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-11-20 16:23:43.462180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:22:12.278 [2024-11-20 16:23:43.462285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.278 [2024-11-20 16:23:43.462291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.278 [2024-11-20 16:23:43.462295] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.278 [2024-11-20 16:23:43.462298] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b690): datao=0, datal=4096, cccid=0 00:22:12.279 [2024-11-20 16:23:43.462302] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x22fd100) on tqpair(0x229b690): expected_datao=0, payload_size=4096 00:22:12.279 [2024-11-20 16:23:43.462306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.279 [2024-11-20 16:23:43.462320] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.279 [2024-11-20 16:23:43.462325] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.543 [2024-11-20 16:23:43.507225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.543 [2024-11-20 16:23:43.507228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b690 00:22:12.543 [2024-11-20 16:23:43.507240] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:12.543 [2024-11-20 16:23:43.507245] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:12.543 [2024-11-20 16:23:43.507250] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:12.543 [2024-11-20 16:23:43.507258] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:12.543 [2024-11-20 16:23:43.507262] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:12.543 [2024-11-20 16:23:43.507266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:12.543 [2024-11-20 16:23:43.507277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:12.543 [2024-11-20 16:23:43.507284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b690) 00:22:12.543 [2024-11-20 16:23:43.507298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:12.543 [2024-11-20 16:23:43.507310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:22:12.543 [2024-11-20 16:23:43.507395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.543 [2024-11-20 16:23:43.507401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.543 [2024-11-20 16:23:43.507404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b690 00:22:12.543 [2024-11-20 16:23:43.507413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b690) 00:22:12.543 
[2024-11-20 16:23:43.507425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.543 [2024-11-20 16:23:43.507430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x229b690) 00:22:12.543 [2024-11-20 16:23:43.507441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.543 [2024-11-20 16:23:43.507446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x229b690) 00:22:12.543 [2024-11-20 16:23:43.507457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.543 [2024-11-20 16:23:43.507465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507471] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b690) 00:22:12.543 [2024-11-20 16:23:43.507476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.543 [2024-11-20 16:23:43.507480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:12.543 [2024-11-20 16:23:43.507488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:12.543 [2024-11-20 16:23:43.507493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229b690) 00:22:12.543 [2024-11-20 16:23:43.507502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.543 [2024-11-20 16:23:43.507513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:22:12.543 [2024-11-20 16:23:43.507518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd280, cid 1, qid 0 00:22:12.543 [2024-11-20 16:23:43.507522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd400, cid 2, qid 0 00:22:12.543 [2024-11-20 16:23:43.507526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:22:12.543 [2024-11-20 16:23:43.507530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:22:12.543 [2024-11-20 16:23:43.507629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.543 [2024-11-20 16:23:43.507635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.543 [2024-11-20 16:23:43.507638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:22:12.543 [2024-11-20 16:23:43.507641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b690 00:22:12.543 [2024-11-20 16:23:43.507647] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:12.543 [2024-11-20 16:23:43.507652] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:12.543 [2024-11-20 16:23:43.507662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229b690) 00:22:12.543 [2024-11-20 16:23:43.507671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.543 [2024-11-20 16:23:43.507680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:22:12.543 [2024-11-20 16:23:43.507753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.543 [2024-11-20 16:23:43.507759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.543 [2024-11-20 16:23:43.507762] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507765] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b690): datao=0, datal=4096, cccid=4 00:22:12.543 [2024-11-20 16:23:43.507769] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fd700) on tqpair(0x229b690): expected_datao=0, payload_size=4096 00:22:12.543 [2024-11-20 16:23:43.507773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507779] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507782] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.543 [2024-11-20 16:23:43.507802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.543 [2024-11-20 16:23:43.507805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b690 00:22:12.543 [2024-11-20 16:23:43.507818] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:12.543 [2024-11-20 16:23:43.507838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229b690) 00:22:12.543 [2024-11-20 16:23:43.507847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.543 [2024-11-20 16:23:43.507853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.507859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x229b690) 00:22:12.543 [2024-11-20 16:23:43.507864] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.543 [2024-11-20 16:23:43.507877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:22:12.543 [2024-11-20 16:23:43.507882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd880, cid 5, qid 0 00:22:12.543 [2024-11-20 16:23:43.507988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.543 [2024-11-20 16:23:43.507994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.543 [2024-11-20 16:23:43.507997] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.508000] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b690): datao=0, datal=1024, cccid=4 00:22:12.543 [2024-11-20 16:23:43.508004] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fd700) on tqpair(0x229b690): expected_datao=0, payload_size=1024 00:22:12.543 [2024-11-20 16:23:43.508008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.508013] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.543 [2024-11-20 16:23:43.508016] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.508021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.544 [2024-11-20 16:23:43.508026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.544 [2024-11-20 16:23:43.508029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.508032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd880) on tqpair=0x229b690 00:22:12.544 [2024-11-20 16:23:43.549298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.544 [2024-11-20 16:23:43.549311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.544 [2024-11-20 16:23:43.549315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.549331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b690 00:22:12.544 [2024-11-20 16:23:43.549342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.549346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229b690) 00:22:12.544 [2024-11-20 16:23:43.549353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.544 [2024-11-20 16:23:43.549370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:22:12.544 [2024-11-20 16:23:43.549477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.544 [2024-11-20 16:23:43.549482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.544 [2024-11-20 16:23:43.549485] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.549492] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b690): datao=0, datal=3072, cccid=4 00:22:12.544 [2024-11-20 16:23:43.549496] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fd700) on tqpair(0x229b690): expected_datao=0, payload_size=3072 00:22:12.544 [2024-11-20 16:23:43.549500] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.549506] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.549509] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.549532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.544 [2024-11-20 16:23:43.549537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.544 [2024-11-20 16:23:43.549540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.549543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b690 00:22:12.544 [2024-11-20 16:23:43.549551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.549554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229b690) 00:22:12.544 [2024-11-20 16:23:43.549560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.544 [2024-11-20 16:23:43.549574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:22:12.544 [2024-11-20 16:23:43.549646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.544 [2024-11-20 16:23:43.549652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.544 [2024-11-20 16:23:43.549655] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.549658] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b690): datao=0, datal=8, cccid=4 00:22:12.544 [2024-11-20 16:23:43.549662] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fd700) on tqpair(0x229b690): expected_datao=0, payload_size=8 00:22:12.544 [2024-11-20 16:23:43.549666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.549671] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.549674] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.590332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.544 [2024-11-20 16:23:43.590342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.544 [2024-11-20 16:23:43.590346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.544 [2024-11-20 16:23:43.590349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b690 00:22:12.544 ===================================================== 00:22:12.544 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:12.544 ===================================================== 00:22:12.544 Controller Capabilities/Features 00:22:12.544 ================================ 00:22:12.544 Vendor ID: 0000 00:22:12.544 Subsystem Vendor ID: 0000 00:22:12.544 Serial Number: .................... 00:22:12.544 Model Number: ........................................ 
00:22:12.544 Firmware Version: 25.01 00:22:12.544 Recommended Arb Burst: 0 00:22:12.544 IEEE OUI Identifier: 00 00 00 00:22:12.544 Multi-path I/O 00:22:12.544 May have multiple subsystem ports: No 00:22:12.544 May have multiple controllers: No 00:22:12.544 Associated with SR-IOV VF: No 00:22:12.544 Max Data Transfer Size: 131072 00:22:12.544 Max Number of Namespaces: 0 00:22:12.544 Max Number of I/O Queues: 1024 00:22:12.544 NVMe Specification Version (VS): 1.3 00:22:12.544 NVMe Specification Version (Identify): 1.3 00:22:12.544 Maximum Queue Entries: 128 00:22:12.544 Contiguous Queues Required: Yes 00:22:12.544 Arbitration Mechanisms Supported 00:22:12.544 Weighted Round Robin: Not Supported 00:22:12.544 Vendor Specific: Not Supported 00:22:12.544 Reset Timeout: 15000 ms 00:22:12.544 Doorbell Stride: 4 bytes 00:22:12.544 NVM Subsystem Reset: Not Supported 00:22:12.544 Command Sets Supported 00:22:12.544 NVM Command Set: Supported 00:22:12.544 Boot Partition: Not Supported 00:22:12.544 Memory Page Size Minimum: 4096 bytes 00:22:12.544 Memory Page Size Maximum: 4096 bytes 00:22:12.544 Persistent Memory Region: Not Supported 00:22:12.544 Optional Asynchronous Events Supported 00:22:12.544 Namespace Attribute Notices: Not Supported 00:22:12.544 Firmware Activation Notices: Not Supported 00:22:12.544 ANA Change Notices: Not Supported 00:22:12.544 PLE Aggregate Log Change Notices: Not Supported 00:22:12.544 LBA Status Info Alert Notices: Not Supported 00:22:12.544 EGE Aggregate Log Change Notices: Not Supported 00:22:12.544 Normal NVM Subsystem Shutdown event: Not Supported 00:22:12.544 Zone Descriptor Change Notices: Not Supported 00:22:12.544 Discovery Log Change Notices: Supported 00:22:12.544 Controller Attributes 00:22:12.544 128-bit Host Identifier: Not Supported 00:22:12.544 Non-Operational Permissive Mode: Not Supported 00:22:12.544 NVM Sets: Not Supported 00:22:12.544 Read Recovery Levels: Not Supported 00:22:12.544 Endurance Groups: Not Supported 00:22:12.544 Predictable Latency Mode: Not Supported 00:22:12.544 Traffic Based Keep ALive: Not Supported 00:22:12.544 Namespace Granularity: Not Supported 00:22:12.544 SQ Associations: Not Supported 00:22:12.544 UUID List: Not Supported 00:22:12.544 Multi-Domain Subsystem: Not Supported 00:22:12.544 Fixed Capacity Management: Not Supported 00:22:12.544 Variable Capacity Management: Not Supported 00:22:12.544 Delete Endurance Group: Not Supported 00:22:12.544 Delete NVM Set: Not Supported 00:22:12.544 Extended LBA Formats Supported: Not Supported 00:22:12.544 Flexible Data Placement Supported: Not Supported 00:22:12.544 00:22:12.544 Controller Memory Buffer Support 00:22:12.544 ================================ 00:22:12.544 Supported: No 00:22:12.544 00:22:12.544 Persistent Memory Region Support 00:22:12.544 ================================ 00:22:12.544 Supported: No 00:22:12.544 00:22:12.544 Admin Command Set Attributes 00:22:12.544 ============================ 00:22:12.544 Security Send/Receive: Not Supported 00:22:12.544 Format NVM: Not Supported 00:22:12.544 Firmware Activate/Download: Not Supported 00:22:12.544 Namespace Management: Not Supported 00:22:12.544 Device Self-Test: Not Supported 00:22:12.544 Directives: Not Supported 00:22:12.544 NVMe-MI: Not Supported 00:22:12.544 Virtualization Management: Not Supported 00:22:12.544 Doorbell Buffer Config: Not Supported 00:22:12.544 Get LBA Status Capability: Not Supported 00:22:12.544 Command & Feature Lockdown Capability: Not Supported 00:22:12.544 Abort Command Limit: 1 00:22:12.544 Async 
Event Request Limit: 4 00:22:12.544 Number of Firmware Slots: N/A 00:22:12.544 Firmware Slot 1 Read-Only: N/A 00:22:12.544 Firmware Activation Without Reset: N/A 00:22:12.544 Multiple Update Detection Support: N/A 00:22:12.544 Firmware Update Granularity: No Information Provided 00:22:12.544 Per-Namespace SMART Log: No 00:22:12.544 Asymmetric Namespace Access Log Page: Not Supported 00:22:12.544 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:12.544 Command Effects Log Page: Not Supported 00:22:12.544 Get Log Page Extended Data: Supported 00:22:12.544 Telemetry Log Pages: Not Supported 00:22:12.544 Persistent Event Log Pages: Not Supported 00:22:12.544 Supported Log Pages Log Page: May Support 00:22:12.544 Commands Supported & Effects Log Page: Not Supported 00:22:12.544 Feature Identifiers & Effects Log Page:May Support 00:22:12.544 NVMe-MI Commands & Effects Log Page: May Support 00:22:12.544 Data Area 4 for Telemetry Log: Not Supported 00:22:12.544 Error Log Page Entries Supported: 128 00:22:12.544 Keep Alive: Not Supported 00:22:12.544 00:22:12.544 NVM Command Set Attributes 00:22:12.544 ========================== 00:22:12.544 Submission Queue Entry Size 00:22:12.544 Max: 1 00:22:12.544 Min: 1 00:22:12.544 Completion Queue Entry Size 00:22:12.544 Max: 1 00:22:12.544 Min: 1 00:22:12.544 Number of Namespaces: 0 00:22:12.544 Compare Command: Not Supported 00:22:12.545 Write Uncorrectable Command: Not Supported 00:22:12.545 Dataset Management Command: Not Supported 00:22:12.545 Write Zeroes Command: Not Supported 00:22:12.545 Set Features Save Field: Not Supported 00:22:12.545 Reservations: Not Supported 00:22:12.545 Timestamp: Not Supported 00:22:12.545 Copy: Not Supported 00:22:12.545 Volatile Write Cache: Not Present 00:22:12.545 Atomic Write Unit (Normal): 1 00:22:12.545 Atomic Write Unit (PFail): 1 00:22:12.545 Atomic Compare & Write Unit: 1 00:22:12.545 Fused Compare & Write: Supported 00:22:12.545 Scatter-Gather List 00:22:12.545 SGL Command Set: Supported 00:22:12.545 SGL Keyed: Supported 00:22:12.545 SGL Bit Bucket Descriptor: Not Supported 00:22:12.545 SGL Metadata Pointer: Not Supported 00:22:12.545 Oversized SGL: Not Supported 00:22:12.545 SGL Metadata Address: Not Supported 00:22:12.545 SGL Offset: Supported 00:22:12.545 Transport SGL Data Block: Not Supported 00:22:12.545 Replay Protected Memory Block: Not Supported 00:22:12.545 00:22:12.545 Firmware Slot Information 00:22:12.545 ========================= 00:22:12.545 Active slot: 0 00:22:12.545 00:22:12.545 00:22:12.545 Error Log 00:22:12.545 ========= 00:22:12.545 00:22:12.545 Active Namespaces 00:22:12.545 ================= 00:22:12.545 Discovery Log Page 00:22:12.545 ================== 00:22:12.545 Generation Counter: 2 00:22:12.545 Number of Records: 2 00:22:12.545 Record Format: 0 00:22:12.545 00:22:12.545 Discovery Log Entry 0 00:22:12.545 ---------------------- 00:22:12.545 Transport Type: 3 (TCP) 00:22:12.545 Address Family: 1 (IPv4) 00:22:12.545 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:12.545 Entry Flags: 00:22:12.545 Duplicate Returned Information: 1 00:22:12.545 Explicit Persistent Connection Support for Discovery: 1 00:22:12.545 Transport Requirements: 00:22:12.545 Secure Channel: Not Required 00:22:12.545 Port ID: 0 (0x0000) 00:22:12.545 Controller ID: 65535 (0xffff) 00:22:12.545 Admin Max SQ Size: 128 00:22:12.545 Transport Service Identifier: 4420 00:22:12.545 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:12.545 Transport Address: 10.0.0.2 00:22:12.545 
Discovery Log Entry 1 00:22:12.545 ---------------------- 00:22:12.545 Transport Type: 3 (TCP) 00:22:12.545 Address Family: 1 (IPv4) 00:22:12.545 Subsystem Type: 2 (NVM Subsystem) 00:22:12.545 Entry Flags: 00:22:12.545 Duplicate Returned Information: 0 00:22:12.545 Explicit Persistent Connection Support for Discovery: 0 00:22:12.545 Transport Requirements: 00:22:12.545 Secure Channel: Not Required 00:22:12.545 Port ID: 0 (0x0000) 00:22:12.545 Controller ID: 65535 (0xffff) 00:22:12.545 Admin Max SQ Size: 128 00:22:12.545 Transport Service Identifier: 4420 00:22:12.545 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:12.545 Transport Address: 10.0.0.2 [2024-11-20 16:23:43.590432] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:12.545 [2024-11-20 16:23:43.590444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b690 00:22:12.545 [2024-11-20 16:23:43.590450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.545 [2024-11-20 16:23:43.590454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd280) on tqpair=0x229b690 00:22:12.545 [2024-11-20 16:23:43.590459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.545 [2024-11-20 16:23:43.590463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd400) on tqpair=0x229b690 00:22:12.545 [2024-11-20 16:23:43.590467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.545 [2024-11-20 16:23:43.590471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b690 00:22:12.545 [2024-11-20 16:23:43.590475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.545 [2024-11-20 16:23:43.590486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b690) 00:22:12.545 [2024-11-20 16:23:43.590499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.545 [2024-11-20 16:23:43.590512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:22:12.545 [2024-11-20 16:23:43.590572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.545 [2024-11-20 16:23:43.590578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.545 [2024-11-20 16:23:43.590581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b690 00:22:12.545 [2024-11-20 16:23:43.590590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b690) 00:22:12.545 [2024-11-20 
16:23:43.590602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.545 [2024-11-20 16:23:43.590614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:22:12.545 [2024-11-20 16:23:43.590692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.545 [2024-11-20 16:23:43.590698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.545 [2024-11-20 16:23:43.590701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b690 00:22:12.545 [2024-11-20 16:23:43.590708] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:12.545 [2024-11-20 16:23:43.590712] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:12.545 [2024-11-20 16:23:43.590720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b690) 00:22:12.545 [2024-11-20 16:23:43.590732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.545 [2024-11-20 16:23:43.590741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:22:12.545 [2024-11-20 16:23:43.590804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.545 [2024-11-20 16:23:43.590810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.545 [2024-11-20 16:23:43.590813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b690 00:22:12.545 [2024-11-20 16:23:43.590824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b690) 00:22:12.545 [2024-11-20 16:23:43.590837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.545 [2024-11-20 16:23:43.590846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:22:12.545 [2024-11-20 16:23:43.590913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.545 [2024-11-20 16:23:43.590919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.545 [2024-11-20 16:23:43.590924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b690 00:22:12.545 [2024-11-20 16:23:43.590935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.590941] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b690) 00:22:12.545 [2024-11-20 16:23:43.590947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.545 [2024-11-20 16:23:43.590957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:22:12.545 [2024-11-20 16:23:43.591021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.545 [2024-11-20 16:23:43.591027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.545 [2024-11-20 16:23:43.591030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.591033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b690 00:22:12.545 [2024-11-20 16:23:43.591041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.591044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.545 [2024-11-20 16:23:43.591048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b690) 00:22:12.545 [2024-11-20 16:23:43.591053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.545 [2024-11-20 16:23:43.591062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:22:12.545 [2024-11-20 16:23:43.591121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.545 [2024-11-20 16:23:43.591127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.545 [2024-11-20 16:23:43.591130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.591133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b690 00:22:12.546 [2024-11-20 16:23:43.591141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.591145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.591148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b690) 00:22:12.546 [2024-11-20 16:23:43.591153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.546 [2024-11-20 16:23:43.591162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:22:12.546 [2024-11-20 16:23:43.595213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.546 [2024-11-20 16:23:43.595221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.546 [2024-11-20 16:23:43.595224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.595228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b690 00:22:12.546 [2024-11-20 16:23:43.595237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.595240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.595243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b690) 00:22:12.546 [2024-11-20 16:23:43.595249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.546 [2024-11-20 16:23:43.595260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:22:12.546 [2024-11-20 16:23:43.595335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.546 [2024-11-20 16:23:43.595341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.546 [2024-11-20 16:23:43.595344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.595356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b690 00:22:12.546 [2024-11-20 16:23:43.595363] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:22:12.546 00:22:12.546 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:12.546 [2024-11-20 16:23:43.631965] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:22:12.546 [2024-11-20 16:23:43.632004] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001989 ] 00:22:12.546 [2024-11-20 16:23:43.673327] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:12.546 [2024-11-20 16:23:43.673365] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:12.546 [2024-11-20 16:23:43.673371] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:12.546 [2024-11-20 16:23:43.673383] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:12.546 [2024-11-20 16:23:43.673391] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:12.546 [2024-11-20 16:23:43.673746] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:12.546 [2024-11-20 16:23:43.673773] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9bf690 0 00:22:12.546 [2024-11-20 16:23:43.684214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:12.546 [2024-11-20 16:23:43.684229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:12.546 [2024-11-20 16:23:43.684233] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:12.546 [2024-11-20 16:23:43.684236] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:12.546 [2024-11-20 16:23:43.684262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.684267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.684270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bf690) 00:22:12.546 [2024-11-20 16:23:43.684280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:12.546 [2024-11-20 16:23:43.684298] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21100, cid 0, qid 0 00:22:12.546 [2024-11-20 16:23:43.695214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.546 [2024-11-20 16:23:43.695222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.546 [2024-11-20 16:23:43.695225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21100) on tqpair=0x9bf690 00:22:12.546 [2024-11-20 16:23:43.695240] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:12.546 [2024-11-20 16:23:43.695246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:12.546 [2024-11-20 16:23:43.695251] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:12.546 [2024-11-20 16:23:43.695261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bf690) 00:22:12.546 [2024-11-20 16:23:43.695278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.546 [2024-11-20 16:23:43.695291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21100, cid 0, qid 0 00:22:12.546 [2024-11-20 16:23:43.695454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.546 [2024-11-20 16:23:43.695460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.546 [2024-11-20 16:23:43.695463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21100) on tqpair=0x9bf690 00:22:12.546 [2024-11-20 16:23:43.695470] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:12.546 [2024-11-20 16:23:43.695477] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:12.546 [2024-11-20 16:23:43.695483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bf690) 00:22:12.546 [2024-11-20 16:23:43.695495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.546 [2024-11-20 16:23:43.695505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21100, cid 0, qid 0 00:22:12.546 [2024-11-20 16:23:43.695597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.546 [2024-11-20 16:23:43.695603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.546 [2024-11-20 16:23:43.695605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21100) on tqpair=0x9bf690 
00:22:12.546 [2024-11-20 16:23:43.695613] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:12.546 [2024-11-20 16:23:43.695620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:12.546 [2024-11-20 16:23:43.695625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bf690) 00:22:12.546 [2024-11-20 16:23:43.695637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.546 [2024-11-20 16:23:43.695646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21100, cid 0, qid 0 00:22:12.546 [2024-11-20 16:23:43.695748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.546 [2024-11-20 16:23:43.695754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.546 [2024-11-20 16:23:43.695757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21100) on tqpair=0x9bf690 00:22:12.546 [2024-11-20 16:23:43.695765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:12.546 [2024-11-20 16:23:43.695772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bf690) 00:22:12.546 [2024-11-20 16:23:43.695785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.546 [2024-11-20 16:23:43.695794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21100, cid 0, qid 0 00:22:12.546 [2024-11-20 16:23:43.695900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.546 [2024-11-20 16:23:43.695906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.546 [2024-11-20 16:23:43.695909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.695912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21100) on tqpair=0x9bf690 00:22:12.546 [2024-11-20 16:23:43.695915] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:12.546 [2024-11-20 16:23:43.695920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:12.546 [2024-11-20 16:23:43.695926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:12.546 [2024-11-20 16:23:43.696033] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:12.546 [2024-11-20 16:23:43.696037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:12.546 [2024-11-20 16:23:43.696044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.696047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.546 [2024-11-20 16:23:43.696050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bf690) 00:22:12.546 [2024-11-20 16:23:43.696055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.547 [2024-11-20 16:23:43.696065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21100, cid 0, qid 0 00:22:12.547 [2024-11-20 16:23:43.696127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.547 [2024-11-20 16:23:43.696133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.547 [2024-11-20 16:23:43.696136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21100) on tqpair=0x9bf690 00:22:12.547 [2024-11-20 16:23:43.696143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:12.547 [2024-11-20 16:23:43.696151] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bf690) 00:22:12.547 [2024-11-20 16:23:43.696163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.547 [2024-11-20 16:23:43.696172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21100, cid 0, qid 0 00:22:12.547 [2024-11-20 16:23:43.696280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.547 [2024-11-20 16:23:43.696287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.547 [2024-11-20 16:23:43.696289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21100) on tqpair=0x9bf690 00:22:12.547 [2024-11-20 16:23:43.696296] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:12.547 [2024-11-20 16:23:43.696300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:12.547 [2024-11-20 16:23:43.696307] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:12.547 [2024-11-20 16:23:43.696316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:12.547 [2024-11-20 16:23:43.696327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bf690) 00:22:12.547 [2024-11-20 16:23:43.696336] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.547 [2024-11-20 16:23:43.696346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21100, cid 0, qid 0 00:22:12.547 [2024-11-20 16:23:43.696442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.547 [2024-11-20 16:23:43.696448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.547 [2024-11-20 16:23:43.696451] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696454] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bf690): datao=0, datal=4096, cccid=0 00:22:12.547 [2024-11-20 16:23:43.696458] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21100) on tqpair(0x9bf690): expected_datao=0, payload_size=4096 00:22:12.547 [2024-11-20 16:23:43.696462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696468] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696472] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.547 [2024-11-20 16:23:43.696537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.547 [2024-11-20 16:23:43.696540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21100) on tqpair=0x9bf690 00:22:12.547 [2024-11-20 16:23:43.696550] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:12.547 [2024-11-20 16:23:43.696554] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:12.547 [2024-11-20 16:23:43.696558] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:12.547 [2024-11-20 16:23:43.696563] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:12.547 [2024-11-20 16:23:43.696567] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:12.547 [2024-11-20 16:23:43.696571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:12.547 [2024-11-20 16:23:43.696580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:12.547 [2024-11-20 16:23:43.696585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bf690) 00:22:12.547 [2024-11-20 16:23:43.696597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:12.547 [2024-11-20 16:23:43.696608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21100, cid 0, qid 0 00:22:12.547 [2024-11-20 16:23:43.696684] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.547 [2024-11-20 16:23:43.696690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.547 [2024-11-20 16:23:43.696693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21100) on tqpair=0x9bf690 00:22:12.547 [2024-11-20 16:23:43.696701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bf690) 00:22:12.547 [2024-11-20 16:23:43.696714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.547 [2024-11-20 16:23:43.696720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9bf690) 00:22:12.547 [2024-11-20 16:23:43.696730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.547 [2024-11-20 16:23:43.696735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9bf690) 00:22:12.547 [2024-11-20 16:23:43.696746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.547 [2024-11-20 16:23:43.696751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bf690) 00:22:12.547 [2024-11-20 16:23:43.696761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.547 [2024-11-20 16:23:43.696765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:12.547 [2024-11-20 16:23:43.696773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:12.547 [2024-11-20 16:23:43.696779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bf690) 00:22:12.547 [2024-11-20 16:23:43.696787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.547 [2024-11-20 16:23:43.696798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21100, cid 0, qid 0 00:22:12.547 [2024-11-20 16:23:43.696802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0xa21280, cid 1, qid 0 00:22:12.547 [2024-11-20 16:23:43.696806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21400, cid 2, qid 0 00:22:12.547 [2024-11-20 16:23:43.696810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21580, cid 3, qid 0 00:22:12.547 [2024-11-20 16:23:43.696814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:22:12.547 [2024-11-20 16:23:43.696936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.547 [2024-11-20 16:23:43.696942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.547 [2024-11-20 16:23:43.696945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.547 [2024-11-20 16:23:43.696948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21700) on tqpair=0x9bf690 00:22:12.547 [2024-11-20 16:23:43.696954] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:12.548 [2024-11-20 16:23:43.696958] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.696965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.696970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.696977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.696980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.696983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bf690) 00:22:12.548 [2024-11-20 16:23:43.696988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:12.548 [2024-11-20 16:23:43.696997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:22:12.548 [2024-11-20 16:23:43.697059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.548 [2024-11-20 16:23:43.697065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.548 [2024-11-20 16:23:43.697068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21700) on tqpair=0x9bf690 00:22:12.548 [2024-11-20 16:23:43.697121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bf690) 00:22:12.548 [2024-11-20 16:23:43.697146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:12.548 [2024-11-20 16:23:43.697156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:22:12.548 [2024-11-20 16:23:43.697244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.548 [2024-11-20 16:23:43.697250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.548 [2024-11-20 16:23:43.697253] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697256] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bf690): datao=0, datal=4096, cccid=4 00:22:12.548 [2024-11-20 16:23:43.697260] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21700) on tqpair(0x9bf690): expected_datao=0, payload_size=4096 00:22:12.548 [2024-11-20 16:23:43.697264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697269] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697272] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.548 [2024-11-20 16:23:43.697286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.548 [2024-11-20 16:23:43.697289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21700) on tqpair=0x9bf690 00:22:12.548 [2024-11-20 16:23:43.697300] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:12.548 [2024-11-20 16:23:43.697308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bf690) 00:22:12.548 [2024-11-20 16:23:43.697331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.548 [2024-11-20 16:23:43.697341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:22:12.548 [2024-11-20 16:23:43.697444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.548 [2024-11-20 16:23:43.697450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.548 [2024-11-20 16:23:43.697453] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697456] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bf690): datao=0, datal=4096, cccid=4 00:22:12.548 [2024-11-20 16:23:43.697460] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21700) on tqpair(0x9bf690): expected_datao=0, payload_size=4096 00:22:12.548 [2024-11-20 16:23:43.697464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697469] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 
16:23:43.697473] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.548 [2024-11-20 16:23:43.697490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.548 [2024-11-20 16:23:43.697493] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21700) on tqpair=0x9bf690 00:22:12.548 [2024-11-20 16:23:43.697505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bf690) 00:22:12.548 [2024-11-20 16:23:43.697529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.548 [2024-11-20 16:23:43.697539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:22:12.548 [2024-11-20 16:23:43.697647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.548 [2024-11-20 16:23:43.697652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.548 [2024-11-20 16:23:43.697655] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697658] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bf690): datao=0, datal=4096, cccid=4 00:22:12.548 [2024-11-20 16:23:43.697662] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21700) on tqpair(0x9bf690): expected_datao=0, payload_size=4096 00:22:12.548 [2024-11-20 16:23:43.697666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697671] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697674] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.548 [2024-11-20 16:23:43.697688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.548 [2024-11-20 16:23:43.697691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21700) on tqpair=0x9bf690 00:22:12.548 [2024-11-20 16:23:43.697701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697720] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697736] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:12.548 [2024-11-20 16:23:43.697739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:12.548 [2024-11-20 16:23:43.697744] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:12.548 [2024-11-20 16:23:43.697755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bf690) 00:22:12.548 [2024-11-20 16:23:43.697764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.548 [2024-11-20 16:23:43.697770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9bf690) 00:22:12.548 [2024-11-20 16:23:43.697781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.548 [2024-11-20 16:23:43.697793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:22:12.548 [2024-11-20 16:23:43.697798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21880, cid 5, qid 0 00:22:12.548 [2024-11-20 16:23:43.697913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.548 [2024-11-20 16:23:43.697919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.548 [2024-11-20 16:23:43.697922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21700) on tqpair=0x9bf690 00:22:12.548 [2024-11-20 16:23:43.697931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.548 [2024-11-20 16:23:43.697936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.548 [2024-11-20 16:23:43.697939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21880) on tqpair=0x9bf690 00:22:12.548 [2024-11-20 16:23:43.697950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.548 [2024-11-20 16:23:43.697953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9bf690) 00:22:12.548 [2024-11-20 16:23:43.697958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.548 [2024-11-20 16:23:43.697968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21880, cid 5, qid 0 00:22:12.548 [2024-11-20 16:23:43.698062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.548 [2024-11-20 16:23:43.698068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.549 [2024-11-20 16:23:43.698071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21880) on tqpair=0x9bf690 00:22:12.549 [2024-11-20 16:23:43.698082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9bf690) 00:22:12.549 [2024-11-20 16:23:43.698091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.549 [2024-11-20 16:23:43.698101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21880, cid 5, qid 0 00:22:12.549 [2024-11-20 16:23:43.698162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.549 [2024-11-20 16:23:43.698168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.549 [2024-11-20 16:23:43.698171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21880) on tqpair=0x9bf690 00:22:12.549 [2024-11-20 16:23:43.698182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9bf690) 00:22:12.549 [2024-11-20 16:23:43.698191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.549 [2024-11-20 16:23:43.698200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21880, cid 5, qid 0 00:22:12.549 [2024-11-20 16:23:43.698264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.549 [2024-11-20 16:23:43.698270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.549 [2024-11-20 16:23:43.698273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21880) on tqpair=0x9bf690 00:22:12.549 [2024-11-20 16:23:43.698288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9bf690) 00:22:12.549 [2024-11-20 16:23:43.698298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.549 [2024-11-20 16:23:43.698303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bf690) 00:22:12.549 [2024-11-20 16:23:43.698312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.549 [2024-11-20 16:23:43.698318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9bf690) 00:22:12.549 [2024-11-20 16:23:43.698326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.549 [2024-11-20 16:23:43.698332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9bf690) 00:22:12.549 [2024-11-20 16:23:43.698340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.549 [2024-11-20 16:23:43.698351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21880, cid 5, qid 0 00:22:12.549 [2024-11-20 16:23:43.698355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:22:12.549 [2024-11-20 16:23:43.698359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21a00, cid 6, qid 0 00:22:12.549 [2024-11-20 16:23:43.698363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21b80, cid 7, qid 0 00:22:12.549 [2024-11-20 16:23:43.698502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.549 [2024-11-20 16:23:43.698507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.549 [2024-11-20 16:23:43.698510] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698513] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bf690): datao=0, datal=8192, cccid=5 00:22:12.549 [2024-11-20 16:23:43.698517] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21880) on tqpair(0x9bf690): expected_datao=0, payload_size=8192 00:22:12.549 [2024-11-20 16:23:43.698524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698561] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698565] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.549 [2024-11-20 16:23:43.698574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.549 [2024-11-20 16:23:43.698577] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698580] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bf690): datao=0, datal=512, cccid=4 00:22:12.549 [2024-11-20 16:23:43.698584] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21700) on tqpair(0x9bf690): expected_datao=0, payload_size=512 00:22:12.549 [2024-11-20 16:23:43.698587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698593] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698596] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.549 [2024-11-20 
16:23:43.698605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.549 [2024-11-20 16:23:43.698608] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698611] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bf690): datao=0, datal=512, cccid=6 00:22:12.549 [2024-11-20 16:23:43.698615] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21a00) on tqpair(0x9bf690): expected_datao=0, payload_size=512 00:22:12.549 [2024-11-20 16:23:43.698618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698623] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698626] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.549 [2024-11-20 16:23:43.698636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.549 [2024-11-20 16:23:43.698639] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698641] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bf690): datao=0, datal=4096, cccid=7 00:22:12.549 [2024-11-20 16:23:43.698645] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21b80) on tqpair(0x9bf690): expected_datao=0, payload_size=4096 00:22:12.549 [2024-11-20 16:23:43.698649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698654] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698657] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.549 [2024-11-20 16:23:43.698669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.549 [2024-11-20 16:23:43.698672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21880) on tqpair=0x9bf690 00:22:12.549 [2024-11-20 16:23:43.698685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.549 [2024-11-20 16:23:43.698690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.549 [2024-11-20 16:23:43.698693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21700) on tqpair=0x9bf690 00:22:12.549 [2024-11-20 16:23:43.698704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.549 [2024-11-20 16:23:43.698709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.549 [2024-11-20 16:23:43.698712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21a00) on tqpair=0x9bf690 00:22:12.549 [2024-11-20 16:23:43.698722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.549 [2024-11-20 16:23:43.698727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.549 [2024-11-20 16:23:43.698730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.549 [2024-11-20 16:23:43.698733] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21b80) on tqpair=0x9bf690 00:22:12.549 ===================================================== 00:22:12.549 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.549 ===================================================== 00:22:12.549 Controller Capabilities/Features 00:22:12.549 ================================ 00:22:12.549 Vendor ID: 8086 00:22:12.549 Subsystem Vendor ID: 8086 00:22:12.549 Serial Number: SPDK00000000000001 00:22:12.549 Model Number: SPDK bdev Controller 00:22:12.549 Firmware Version: 25.01 00:22:12.549 Recommended Arb Burst: 6 00:22:12.549 IEEE OUI Identifier: e4 d2 5c 00:22:12.549 Multi-path I/O 00:22:12.549 May have multiple subsystem ports: Yes 00:22:12.549 May have multiple controllers: Yes 00:22:12.549 Associated with SR-IOV VF: No 00:22:12.549 Max Data Transfer Size: 131072 00:22:12.549 Max Number of Namespaces: 32 00:22:12.549 Max Number of I/O Queues: 127 00:22:12.549 NVMe Specification Version (VS): 1.3 00:22:12.549 NVMe Specification Version (Identify): 1.3 00:22:12.549 Maximum Queue Entries: 128 00:22:12.549 Contiguous Queues Required: Yes 00:22:12.549 Arbitration Mechanisms Supported 00:22:12.549 Weighted Round Robin: Not Supported 00:22:12.549 Vendor Specific: Not Supported 00:22:12.549 Reset Timeout: 15000 ms 00:22:12.549 Doorbell Stride: 4 bytes 00:22:12.549 NVM Subsystem Reset: Not Supported 00:22:12.549 Command Sets Supported 00:22:12.549 NVM Command Set: Supported 00:22:12.549 Boot Partition: Not Supported 00:22:12.549 Memory Page Size Minimum: 4096 bytes 00:22:12.549 Memory Page Size Maximum: 4096 bytes 00:22:12.549 Persistent Memory Region: Not Supported 00:22:12.549 Optional Asynchronous Events Supported 00:22:12.549 Namespace Attribute Notices: Supported 00:22:12.549 Firmware Activation Notices: Not Supported 00:22:12.549 ANA Change Notices: Not Supported 00:22:12.549 PLE Aggregate Log Change Notices: Not Supported 00:22:12.550 LBA Status Info Alert Notices: Not Supported 00:22:12.550 EGE Aggregate Log Change Notices: Not Supported 00:22:12.550 Normal NVM Subsystem Shutdown event: Not Supported 00:22:12.550 Zone Descriptor Change Notices: Not Supported 00:22:12.550 Discovery Log Change Notices: Not Supported 00:22:12.550 Controller Attributes 00:22:12.550 128-bit Host Identifier: Supported 00:22:12.550 Non-Operational Permissive Mode: Not Supported 00:22:12.550 NVM Sets: Not Supported 00:22:12.550 Read Recovery Levels: Not Supported 00:22:12.550 Endurance Groups: Not Supported 00:22:12.550 Predictable Latency Mode: Not Supported 00:22:12.550 Traffic Based Keep ALive: Not Supported 00:22:12.550 Namespace Granularity: Not Supported 00:22:12.550 SQ Associations: Not Supported 00:22:12.550 UUID List: Not Supported 00:22:12.550 Multi-Domain Subsystem: Not Supported 00:22:12.550 Fixed Capacity Management: Not Supported 00:22:12.550 Variable Capacity Management: Not Supported 00:22:12.550 Delete Endurance Group: Not Supported 00:22:12.550 Delete NVM Set: Not Supported 00:22:12.550 Extended LBA Formats Supported: Not Supported 00:22:12.550 Flexible Data Placement Supported: Not Supported 00:22:12.550 00:22:12.550 Controller Memory Buffer Support 00:22:12.550 ================================ 00:22:12.550 Supported: No 00:22:12.550 00:22:12.550 Persistent Memory Region Support 00:22:12.550 ================================ 00:22:12.550 Supported: No 00:22:12.550 00:22:12.550 Admin Command Set Attributes 00:22:12.550 ============================ 00:22:12.550 Security 
Send/Receive: Not Supported 00:22:12.550 Format NVM: Not Supported 00:22:12.550 Firmware Activate/Download: Not Supported 00:22:12.550 Namespace Management: Not Supported 00:22:12.550 Device Self-Test: Not Supported 00:22:12.550 Directives: Not Supported 00:22:12.550 NVMe-MI: Not Supported 00:22:12.550 Virtualization Management: Not Supported 00:22:12.550 Doorbell Buffer Config: Not Supported 00:22:12.550 Get LBA Status Capability: Not Supported 00:22:12.550 Command & Feature Lockdown Capability: Not Supported 00:22:12.550 Abort Command Limit: 4 00:22:12.550 Async Event Request Limit: 4 00:22:12.550 Number of Firmware Slots: N/A 00:22:12.550 Firmware Slot 1 Read-Only: N/A 00:22:12.550 Firmware Activation Without Reset: N/A 00:22:12.550 Multiple Update Detection Support: N/A 00:22:12.550 Firmware Update Granularity: No Information Provided 00:22:12.550 Per-Namespace SMART Log: No 00:22:12.550 Asymmetric Namespace Access Log Page: Not Supported 00:22:12.550 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:12.550 Command Effects Log Page: Supported 00:22:12.550 Get Log Page Extended Data: Supported 00:22:12.550 Telemetry Log Pages: Not Supported 00:22:12.550 Persistent Event Log Pages: Not Supported 00:22:12.550 Supported Log Pages Log Page: May Support 00:22:12.550 Commands Supported & Effects Log Page: Not Supported 00:22:12.550 Feature Identifiers & Effects Log Page:May Support 00:22:12.550 NVMe-MI Commands & Effects Log Page: May Support 00:22:12.550 Data Area 4 for Telemetry Log: Not Supported 00:22:12.550 Error Log Page Entries Supported: 128 00:22:12.550 Keep Alive: Supported 00:22:12.550 Keep Alive Granularity: 10000 ms 00:22:12.550 00:22:12.550 NVM Command Set Attributes 00:22:12.550 ========================== 00:22:12.550 Submission Queue Entry Size 00:22:12.550 Max: 64 00:22:12.550 Min: 64 00:22:12.550 Completion Queue Entry Size 00:22:12.550 Max: 16 00:22:12.550 Min: 16 00:22:12.550 Number of Namespaces: 32 00:22:12.550 Compare Command: Supported 00:22:12.550 Write Uncorrectable Command: Not Supported 00:22:12.550 Dataset Management Command: Supported 00:22:12.550 Write Zeroes Command: Supported 00:22:12.550 Set Features Save Field: Not Supported 00:22:12.550 Reservations: Supported 00:22:12.550 Timestamp: Not Supported 00:22:12.550 Copy: Supported 00:22:12.550 Volatile Write Cache: Present 00:22:12.550 Atomic Write Unit (Normal): 1 00:22:12.550 Atomic Write Unit (PFail): 1 00:22:12.550 Atomic Compare & Write Unit: 1 00:22:12.550 Fused Compare & Write: Supported 00:22:12.550 Scatter-Gather List 00:22:12.550 SGL Command Set: Supported 00:22:12.550 SGL Keyed: Supported 00:22:12.550 SGL Bit Bucket Descriptor: Not Supported 00:22:12.550 SGL Metadata Pointer: Not Supported 00:22:12.550 Oversized SGL: Not Supported 00:22:12.550 SGL Metadata Address: Not Supported 00:22:12.550 SGL Offset: Supported 00:22:12.550 Transport SGL Data Block: Not Supported 00:22:12.550 Replay Protected Memory Block: Not Supported 00:22:12.550 00:22:12.550 Firmware Slot Information 00:22:12.550 ========================= 00:22:12.550 Active slot: 1 00:22:12.550 Slot 1 Firmware Revision: 25.01 00:22:12.550 00:22:12.550 00:22:12.550 Commands Supported and Effects 00:22:12.550 ============================== 00:22:12.550 Admin Commands 00:22:12.550 -------------- 00:22:12.550 Get Log Page (02h): Supported 00:22:12.550 Identify (06h): Supported 00:22:12.550 Abort (08h): Supported 00:22:12.550 Set Features (09h): Supported 00:22:12.550 Get Features (0Ah): Supported 00:22:12.550 Asynchronous Event Request (0Ch): 
Supported 00:22:12.550 Keep Alive (18h): Supported 00:22:12.550 I/O Commands 00:22:12.550 ------------ 00:22:12.550 Flush (00h): Supported LBA-Change 00:22:12.550 Write (01h): Supported LBA-Change 00:22:12.550 Read (02h): Supported 00:22:12.550 Compare (05h): Supported 00:22:12.550 Write Zeroes (08h): Supported LBA-Change 00:22:12.550 Dataset Management (09h): Supported LBA-Change 00:22:12.550 Copy (19h): Supported LBA-Change 00:22:12.550 00:22:12.550 Error Log 00:22:12.550 ========= 00:22:12.550 00:22:12.550 Arbitration 00:22:12.550 =========== 00:22:12.550 Arbitration Burst: 1 00:22:12.550 00:22:12.550 Power Management 00:22:12.550 ================ 00:22:12.550 Number of Power States: 1 00:22:12.550 Current Power State: Power State #0 00:22:12.550 Power State #0: 00:22:12.550 Max Power: 0.00 W 00:22:12.550 Non-Operational State: Operational 00:22:12.550 Entry Latency: Not Reported 00:22:12.550 Exit Latency: Not Reported 00:22:12.550 Relative Read Throughput: 0 00:22:12.550 Relative Read Latency: 0 00:22:12.550 Relative Write Throughput: 0 00:22:12.550 Relative Write Latency: 0 00:22:12.550 Idle Power: Not Reported 00:22:12.550 Active Power: Not Reported 00:22:12.550 Non-Operational Permissive Mode: Not Supported 00:22:12.550 00:22:12.550 Health Information 00:22:12.550 ================== 00:22:12.550 Critical Warnings: 00:22:12.550 Available Spare Space: OK 00:22:12.550 Temperature: OK 00:22:12.550 Device Reliability: OK 00:22:12.550 Read Only: No 00:22:12.550 Volatile Memory Backup: OK 00:22:12.550 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:12.550 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:12.550 Available Spare: 0% 00:22:12.550 Available Spare Threshold: 0% 00:22:12.550 Life Percentage Used:[2024-11-20 16:23:43.698812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.550 [2024-11-20 16:23:43.698816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9bf690) 00:22:12.550 [2024-11-20 16:23:43.698822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.550 [2024-11-20 16:23:43.698833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21b80, cid 7, qid 0 00:22:12.550 [2024-11-20 16:23:43.698904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.550 [2024-11-20 16:23:43.698909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.550 [2024-11-20 16:23:43.698912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.550 [2024-11-20 16:23:43.698915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21b80) on tqpair=0x9bf690 00:22:12.550 [2024-11-20 16:23:43.698942] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:12.550 [2024-11-20 16:23:43.698951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21100) on tqpair=0x9bf690 00:22:12.550 [2024-11-20 16:23:43.698957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.550 [2024-11-20 16:23:43.698961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21280) on tqpair=0x9bf690 00:22:12.550 [2024-11-20 16:23:43.698965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.550 [2024-11-20 
16:23:43.698969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21400) on tqpair=0x9bf690 00:22:12.550 [2024-11-20 16:23:43.698973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.550 [2024-11-20 16:23:43.698977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21580) on tqpair=0x9bf690 00:22:12.550 [2024-11-20 16:23:43.698981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.550 [2024-11-20 16:23:43.698987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.550 [2024-11-20 16:23:43.698991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.550 [2024-11-20 16:23:43.698994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bf690) 00:22:12.551 [2024-11-20 16:23:43.698999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.551 [2024-11-20 16:23:43.699010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21580, cid 3, qid 0 00:22:12.551 [2024-11-20 16:23:43.699102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.551 [2024-11-20 16:23:43.699107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.551 [2024-11-20 16:23:43.699110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.551 [2024-11-20 16:23:43.699113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21580) on tqpair=0x9bf690 00:22:12.551 [2024-11-20 16:23:43.699119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.551 [2024-11-20 16:23:43.699123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.551 [2024-11-20 16:23:43.699125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bf690) 00:22:12.551 [2024-11-20 16:23:43.699131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.551 [2024-11-20 16:23:43.699146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21580, cid 3, qid 0 00:22:12.551 [2024-11-20 16:23:43.703211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.551 [2024-11-20 16:23:43.703219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.551 [2024-11-20 16:23:43.703222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.551 [2024-11-20 16:23:43.703225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21580) on tqpair=0x9bf690 00:22:12.551 [2024-11-20 16:23:43.703229] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:12.551 [2024-11-20 16:23:43.703233] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:12.551 [2024-11-20 16:23:43.703242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.551 [2024-11-20 16:23:43.703246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.551 [2024-11-20 16:23:43.703249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bf690) 00:22:12.551 [2024-11-20 16:23:43.703255] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.551 [2024-11-20 16:23:43.703266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21580, cid 3, qid 0 00:22:12.551 [2024-11-20 16:23:43.703451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.551 [2024-11-20 16:23:43.703456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.551 [2024-11-20 16:23:43.703459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.551 [2024-11-20 16:23:43.703462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa21580) on tqpair=0x9bf690 00:22:12.551 [2024-11-20 16:23:43.703469] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:22:12.551 0% 00:22:12.551 Data Units Read: 0 00:22:12.551 Data Units Written: 0 00:22:12.551 Host Read Commands: 0 00:22:12.551 Host Write Commands: 0 00:22:12.551 Controller Busy Time: 0 minutes 00:22:12.551 Power Cycles: 0 00:22:12.551 Power On Hours: 0 hours 00:22:12.551 Unsafe Shutdowns: 0 00:22:12.551 Unrecoverable Media Errors: 0 00:22:12.551 Lifetime Error Log Entries: 0 00:22:12.551 Warning Temperature Time: 0 minutes 00:22:12.551 Critical Temperature Time: 0 minutes 00:22:12.551 00:22:12.551 Number of Queues 00:22:12.551 ================ 00:22:12.551 Number of I/O Submission Queues: 127 00:22:12.551 Number of I/O Completion Queues: 127 00:22:12.551 00:22:12.551 Active Namespaces 00:22:12.551 ================= 00:22:12.551 Namespace ID:1 00:22:12.551 Error Recovery Timeout: Unlimited 00:22:12.551 Command Set Identifier: NVM (00h) 00:22:12.551 Deallocate: Supported 00:22:12.551 Deallocated/Unwritten Error: Not Supported 00:22:12.551 Deallocated Read Value: Unknown 00:22:12.551 Deallocate in Write Zeroes: Not Supported 00:22:12.551 Deallocated Guard Field: 0xFFFF 00:22:12.551 Flush: Supported 00:22:12.551 Reservation: Supported 00:22:12.551 Namespace Sharing Capabilities: Multiple Controllers 00:22:12.551 Size (in LBAs): 131072 (0GiB) 00:22:12.551 Capacity (in LBAs): 131072 (0GiB) 00:22:12.551 Utilization (in LBAs): 131072 (0GiB) 00:22:12.551 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:12.551 EUI64: ABCDEF0123456789 00:22:12.551 UUID: 80fbc178-7ba2-4089-bc1a-094e7d0905c4 00:22:12.551 Thin Provisioning: Not Supported 00:22:12.551 Per-NS Atomic Units: Yes 00:22:12.551 Atomic Boundary Size (Normal): 0 00:22:12.551 Atomic Boundary Size (PFail): 0 00:22:12.551 Atomic Boundary Offset: 0 00:22:12.551 Maximum Single Source Range Length: 65535 00:22:12.551 Maximum Copy Length: 65535 00:22:12.551 Maximum Source Range Count: 1 00:22:12.551 NGUID/EUI64 Never Reused: No 00:22:12.551 Namespace Write Protected: No 00:22:12.551 Number of LBA Formats: 1 00:22:12.551 Current LBA Format: LBA Format #00 00:22:12.551 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:12.551 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.551 16:23:43 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.551 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.551 rmmod nvme_tcp 00:22:12.551 rmmod nvme_fabrics 00:22:12.551 rmmod nvme_keyring 00:22:12.810 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:12.810 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:12.810 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:12.810 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2001768 ']' 00:22:12.810 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2001768 00:22:12.810 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2001768 ']' 00:22:12.810 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2001768 00:22:12.810 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:12.810 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.810 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2001768 00:22:12.811 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.811 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.811 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2001768' 00:22:12.811 killing process with pid 2001768 00:22:12.811 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2001768 00:22:12.811 16:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2001768 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.811 16:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:15.345 00:22:15.345 real 0m9.881s 00:22:15.345 user 0m7.881s 00:22:15.345 sys 0m4.861s 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.345 ************************************ 00:22:15.345 END TEST nvmf_identify 00:22:15.345 ************************************ 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.345 ************************************ 00:22:15.345 START TEST nvmf_perf 00:22:15.345 ************************************ 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:15.345 * Looking for test storage... 00:22:15.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:15.345 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:15.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.346 --rc genhtml_branch_coverage=1 00:22:15.346 --rc genhtml_function_coverage=1 00:22:15.346 --rc genhtml_legend=1 00:22:15.346 --rc geninfo_all_blocks=1 00:22:15.346 --rc geninfo_unexecuted_blocks=1 00:22:15.346 00:22:15.346 ' 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:15.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.346 --rc genhtml_branch_coverage=1 00:22:15.346 --rc genhtml_function_coverage=1 00:22:15.346 --rc genhtml_legend=1 00:22:15.346 --rc geninfo_all_blocks=1 00:22:15.346 --rc geninfo_unexecuted_blocks=1 00:22:15.346 00:22:15.346 ' 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:15.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.346 --rc genhtml_branch_coverage=1 00:22:15.346 --rc genhtml_function_coverage=1 00:22:15.346 --rc genhtml_legend=1 00:22:15.346 --rc geninfo_all_blocks=1 00:22:15.346 --rc geninfo_unexecuted_blocks=1 00:22:15.346 00:22:15.346 ' 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:15.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.346 --rc genhtml_branch_coverage=1 00:22:15.346 --rc genhtml_function_coverage=1 00:22:15.346 --rc genhtml_legend=1 00:22:15.346 --rc geninfo_all_blocks=1 00:22:15.346 --rc geninfo_unexecuted_blocks=1 00:22:15.346 00:22:15.346 ' 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:15.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.346 16:23:46 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:15.346 16:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.924 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:21.925 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:21.925 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:21.925 Found net devices under 0000:86:00.0: cvl_0_0 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.925 16:23:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:21.925 Found net devices under 0000:86:00.1: cvl_0_1 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.925 16:23:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:21.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:22:21.925 00:22:21.925 --- 10.0.0.2 ping statistics --- 00:22:21.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.925 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:22:21.925 00:22:21.925 --- 10.0.0.1 ping statistics --- 00:22:21.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.925 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2005522 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2005522 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2005522 ']' 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:21.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.925 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.925 [2024-11-20 16:23:52.387098] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:22:21.925 [2024-11-20 16:23:52.387145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.925 [2024-11-20 16:23:52.466442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.925 [2024-11-20 16:23:52.509539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.925 [2024-11-20 16:23:52.509574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.925 [2024-11-20 16:23:52.509582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.925 [2024-11-20 16:23:52.509588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.926 [2024-11-20 16:23:52.509593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.926 [2024-11-20 16:23:52.511157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.926 [2024-11-20 16:23:52.511268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.926 [2024-11-20 16:23:52.511375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.926 [2024-11-20 16:23:52.511376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.926 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.926 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:21.926 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:21.926 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:21.926 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.926 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.926 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:21.926 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:24.458 16:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:24.458 16:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:24.717 16:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:24.717 16:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:24.976 16:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
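For readers following the perf.sh setup traced above: the target-side plumbing it drives can be reproduced by hand against an already-running nvmf_tgt. This is a minimal sketch, not the test's own code path, assuming the SPDK repo root as the working directory, jq available on PATH, and the same 64 MiB / 512-byte malloc sizing the script sets (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512); the traddr and bdev name shown in the comments are just the values from this run.

# Pull the traddr of the NVMe bdev the target's config created (perf.sh does this with jq)
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[].params | select(.name=="Nvme0").traddr'   # e.g. 0000:5e:00.0

# Create a 64 MiB malloc bdev with a 512-byte block size; the RPC prints the new bdev name
./scripts/rpc.py bdev_malloc_create 64 512                  # e.g. Malloc0

The subsystem wiring that follows in the log (nvmf_create_transport -t tcp -o, nvmf_create_subsystem for cnode1, nvmf_subsystem_add_ns for each bdev, and nvmf_subsystem_add_listener on 10.0.0.2:4420) goes through the same rpc.py entry point.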
00:22:24.976 16:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:24.976 16:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:24.976 16:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:24.976 16:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.234 [2024-11-20 16:23:56.278151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.234 16:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.493 16:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:25.493 16:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.751 16:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:25.751 16:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:25.751 16:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:26.010 [2024-11-20 16:23:57.086391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.010 16:23:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:26.269 16:23:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:26.269 16:23:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:26.269 16:23:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:26.269 16:23:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:27.641 Initializing NVMe Controllers 00:22:27.641 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:27.641 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:27.641 Initialization complete. Launching workers. 
00:22:27.641 ======================================================== 00:22:27.641 Latency(us) 00:22:27.641 Device Information : IOPS MiB/s Average min max 00:22:27.641 PCIE (0000:5e:00.0) NSID 1 from core 0: 98454.75 384.59 324.28 34.10 4740.29 00:22:27.641 ======================================================== 00:22:27.641 Total : 98454.75 384.59 324.28 34.10 4740.29 00:22:27.641 00:22:27.641 16:23:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:28.573 Initializing NVMe Controllers 00:22:28.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:28.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:28.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:28.573 Initialization complete. Launching workers. 00:22:28.573 ======================================================== 00:22:28.573 Latency(us) 00:22:28.573 Device Information : IOPS MiB/s Average min max 00:22:28.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 170.40 0.67 5863.96 105.32 44694.76 00:22:28.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 39.86 0.16 25866.75 7962.77 47901.42 00:22:28.573 ======================================================== 00:22:28.573 Total : 210.26 0.82 9655.96 105.32 47901.42 00:22:28.573 00:22:28.831 16:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:30.205 Initializing NVMe Controllers 00:22:30.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:30.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:30.205 Initialization complete. Launching workers. 00:22:30.205 ======================================================== 00:22:30.205 Latency(us) 00:22:30.205 Device Information : IOPS MiB/s Average min max 00:22:30.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11229.19 43.86 2848.76 420.42 6857.79 00:22:30.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3833.36 14.97 8407.51 4818.25 16374.56 00:22:30.205 ======================================================== 00:22:30.205 Total : 15062.55 58.84 4263.44 420.42 16374.56 00:22:30.205 00:22:30.205 16:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:30.205 16:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:30.205 16:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:32.810 Initializing NVMe Controllers 00:22:32.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.810 Controller IO queue size 128, less than required. 00:22:32.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:32.810 Controller IO queue size 128, less than required. 00:22:32.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:32.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:32.810 Initialization complete. Launching workers. 00:22:32.810 ======================================================== 00:22:32.810 Latency(us) 00:22:32.810 Device Information : IOPS MiB/s Average min max 00:22:32.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1787.25 446.81 72670.29 48493.96 134319.63 00:22:32.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.25 151.56 221530.12 103595.32 325705.19 00:22:32.810 ======================================================== 00:22:32.810 Total : 2393.49 598.37 110374.81 48493.96 325705.19 00:22:32.810 00:22:32.810 16:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:32.810 No valid NVMe controllers or AIO or URING devices found 00:22:32.810 Initializing NVMe Controllers 00:22:32.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.810 Controller IO queue size 128, less than required. 00:22:32.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.810 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:32.810 Controller IO queue size 128, less than required. 00:22:32.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.810 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:32.810 WARNING: Some requested NVMe devices were skipped 00:22:32.810 16:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:35.377 Initializing NVMe Controllers 00:22:35.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.377 Controller IO queue size 128, less than required. 00:22:35.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.377 Controller IO queue size 128, less than required. 00:22:35.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:35.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:35.377 Initialization complete. Launching workers. 
00:22:35.377 00:22:35.377 ==================== 00:22:35.377 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:35.377 TCP transport: 00:22:35.377 polls: 15331 00:22:35.377 idle_polls: 11951 00:22:35.377 sock_completions: 3380 00:22:35.377 nvme_completions: 6509 00:22:35.377 submitted_requests: 9850 00:22:35.377 queued_requests: 1 00:22:35.377 00:22:35.377 ==================== 00:22:35.377 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:35.377 TCP transport: 00:22:35.377 polls: 11545 00:22:35.377 idle_polls: 7874 00:22:35.377 sock_completions: 3671 00:22:35.377 nvme_completions: 6463 00:22:35.377 submitted_requests: 9654 00:22:35.377 queued_requests: 1 00:22:35.377 ======================================================== 00:22:35.377 Latency(us) 00:22:35.377 Device Information : IOPS MiB/s Average min max 00:22:35.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1623.56 405.89 80666.92 52912.62 133311.46 00:22:35.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1612.09 403.02 80304.45 45796.94 131533.16 00:22:35.377 ======================================================== 00:22:35.377 Total : 3235.65 808.91 80486.33 45796.94 133311.46 00:22:35.377 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.377 rmmod nvme_tcp 00:22:35.377 rmmod nvme_fabrics 00:22:35.377 rmmod nvme_keyring 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2005522 ']' 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2005522 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2005522 ']' 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2005522 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2005522 00:22:35.377 16:24:06 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2005522' 00:22:35.377 killing process with pid 2005522 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2005522 00:22:35.377 16:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2005522 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.908 16:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.813 00:22:39.813 real 0m24.419s 00:22:39.813 user 1m3.638s 00:22:39.813 sys 0m8.312s 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:39.813 ************************************ 00:22:39.813 END TEST nvmf_perf 00:22:39.813 ************************************ 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.813 ************************************ 00:22:39.813 START TEST nvmf_fio_host 00:22:39.813 ************************************ 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:39.813 * Looking for test storage... 
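Before the nvmf_fio_host output continues, note how the perf target above was shut down. A rough sketch of the equivalent manual cleanup, assuming the same cnode1 subsystem, that $nvmfpid is a placeholder for the nvmf_tgt PID (2005522 in this run), and that the netns/interface names match this rig (the test itself wraps the namespace removal in its _remove_spdk_ns helper):

# Remove the subsystem, then stop the target process and reap it
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill $nvmfpid && wait $nvmfpid

# Unload the host-side fabrics modules and drop the test addresses/namespace
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk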
00:22:39.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:39.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.813 --rc genhtml_branch_coverage=1 00:22:39.813 --rc genhtml_function_coverage=1 00:22:39.813 --rc genhtml_legend=1 00:22:39.813 --rc geninfo_all_blocks=1 00:22:39.813 --rc geninfo_unexecuted_blocks=1 00:22:39.813 00:22:39.813 ' 00:22:39.813 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:39.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.814 --rc genhtml_branch_coverage=1 00:22:39.814 --rc genhtml_function_coverage=1 00:22:39.814 --rc genhtml_legend=1 00:22:39.814 --rc geninfo_all_blocks=1 00:22:39.814 --rc geninfo_unexecuted_blocks=1 00:22:39.814 00:22:39.814 ' 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:39.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.814 --rc genhtml_branch_coverage=1 00:22:39.814 --rc genhtml_function_coverage=1 00:22:39.814 --rc genhtml_legend=1 00:22:39.814 --rc geninfo_all_blocks=1 00:22:39.814 --rc geninfo_unexecuted_blocks=1 00:22:39.814 00:22:39.814 ' 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:39.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.814 --rc genhtml_branch_coverage=1 00:22:39.814 --rc genhtml_function_coverage=1 00:22:39.814 --rc genhtml_legend=1 00:22:39.814 --rc geninfo_all_blocks=1 00:22:39.814 --rc geninfo_unexecuted_blocks=1 00:22:39.814 00:22:39.814 ' 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.814 16:24:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:39.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:39.814 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:39.815 
16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.815 16:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:46.386 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:46.386 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:46.386 Found net devices under 0000:86:00.0: cvl_0_0 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:46.386 Found net devices under 0000:86:00.1: cvl_0_1 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:46.386 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:46.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:22:46.387 00:22:46.387 --- 10.0.0.2 ping statistics --- 00:22:46.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.387 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:22:46.387 00:22:46.387 --- 10.0.0.1 ping statistics --- 00:22:46.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.387 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2012164 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2012164 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2012164 ']' 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.387 16:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.387 [2024-11-20 16:24:16.884772] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
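[editor's note] For reference, the nvmftestinit trace above boils down to a short iproute2/iptables sequence: one e810 port (cvl_0_0) is moved into a private network namespace as the target side, its peer port (cvl_0_1) stays in the host namespace as the initiator side, and the pair gets the 10.0.0.1/10.0.0.2 addresses used by the rest of this run. A minimal standalone sketch of those steps, assuming the same cvl_0_0/cvl_0_1 interface names from this run and root privileges:

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
ping -c 1 10.0.0.2                                  # target reachable from the host
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the host from the target
modprobe nvme-tcp                                   # kernel initiator used later in the run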
00:22:46.387 [2024-11-20 16:24:16.884816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.387 [2024-11-20 16:24:16.963883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.387 [2024-11-20 16:24:17.005580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.387 [2024-11-20 16:24:17.005621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.387 [2024-11-20 16:24:17.005628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.387 [2024-11-20 16:24:17.005634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.387 [2024-11-20 16:24:17.005640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.387 [2024-11-20 16:24:17.008222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.387 [2024-11-20 16:24:17.008247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.387 [2024-11-20 16:24:17.008360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.387 [2024-11-20 16:24:17.008361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.645 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.645 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:46.645 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:46.903 [2024-11-20 16:24:17.900080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.903 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:46.903 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.903 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.903 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:47.161 Malloc1 00:22:47.161 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:47.419 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:47.419 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:47.677 [2024-11-20 16:24:18.738905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.677 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:47.936 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:47.936 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:47.936 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:47.936 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:47.936 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:48.197 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:48.197 fio-3.35 00:22:48.197 Starting 1 thread 00:22:50.724 00:22:50.724 test: (groupid=0, jobs=1): 
err= 0: pid=2012707: Wed Nov 20 16:24:21 2024 00:22:50.724 read: IOPS=11.8k, BW=46.1MiB/s (48.4MB/s)(92.5MiB/2005msec) 00:22:50.724 slat (nsec): min=1528, max=239725, avg=1706.53, stdev=2191.90 00:22:50.724 clat (usec): min=3132, max=10588, avg=5972.46, stdev=468.63 00:22:50.724 lat (usec): min=3165, max=10589, avg=5974.16, stdev=468.54 00:22:50.724 clat percentiles (usec): 00:22:50.724 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:22:50.724 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6063], 00:22:50.724 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:22:50.724 | 99.00th=[ 7046], 99.50th=[ 7242], 99.90th=[ 8848], 99.95th=[ 9765], 00:22:50.724 | 99.99th=[10552] 00:22:50.724 bw ( KiB/s): min=46184, max=48048, per=99.98%, avg=47234.00, stdev=782.98, samples=4 00:22:50.724 iops : min=11546, max=12012, avg=11808.50, stdev=195.74, samples=4 00:22:50.724 write: IOPS=11.8k, BW=45.9MiB/s (48.1MB/s)(92.1MiB/2005msec); 0 zone resets 00:22:50.724 slat (nsec): min=1570, max=223609, avg=1771.62, stdev=1630.28 00:22:50.724 clat (usec): min=2439, max=9292, avg=4844.97, stdev=387.29 00:22:50.724 lat (usec): min=2455, max=9293, avg=4846.74, stdev=387.27 00:22:50.724 clat percentiles (usec): 00:22:50.724 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4555], 00:22:50.724 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:22:50.724 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:22:50.724 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 7701], 99.95th=[ 8586], 00:22:50.724 | 99.99th=[ 9110] 00:22:50.724 bw ( KiB/s): min=46728, max=47552, per=99.98%, avg=47010.00, stdev=369.29, samples=4 00:22:50.724 iops : min=11682, max=11888, avg=11752.50, stdev=92.32, samples=4 00:22:50.724 lat (msec) : 4=0.67%, 10=99.32%, 20=0.01% 00:22:50.724 cpu : usr=74.00%, sys=25.00%, ctx=118, majf=0, minf=3 00:22:50.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:50.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:50.724 issued rwts: total=23681,23568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:50.724 00:22:50.724 Run status group 0 (all jobs): 00:22:50.724 READ: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=92.5MiB (97.0MB), run=2005-2005msec 00:22:50.724 WRITE: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=92.1MiB (96.5MB), run=2005-2005msec 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:50.724 16:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:50.724 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:50.724 fio-3.35 00:22:50.724 Starting 1 thread 00:22:53.256 00:22:53.256 test: (groupid=0, jobs=1): err= 0: pid=2013199: Wed Nov 20 16:24:24 2024 00:22:53.256 read: IOPS=11.0k, BW=172MiB/s (180MB/s)(344MiB/2004msec) 00:22:53.256 slat (nsec): min=2462, max=92829, avg=2858.24, stdev=1321.70 00:22:53.256 clat (usec): min=1067, max=13926, avg=6734.88, stdev=1596.95 00:22:53.256 lat (usec): min=1070, max=13935, avg=6737.74, stdev=1597.09 00:22:53.256 clat percentiles (usec): 00:22:53.256 | 1.00th=[ 3687], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5342], 00:22:53.256 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111], 00:22:53.256 | 70.00th=[ 7504], 80.00th=[ 8029], 90.00th=[ 8717], 95.00th=[ 9503], 00:22:53.256 | 99.00th=[11207], 99.50th=[11731], 99.90th=[12911], 99.95th=[13304], 00:22:53.256 | 99.99th=[13435] 00:22:53.256 bw ( KiB/s): min=83936, max=95872, per=50.32%, avg=88512.00, stdev=5141.49, samples=4 00:22:53.256 iops : min= 5246, max= 5992, avg=5532.00, stdev=321.34, samples=4 00:22:53.256 write: IOPS=6382, BW=99.7MiB/s (105MB/s)(181MiB/1815msec); 0 zone resets 
00:22:53.256 slat (usec): min=28, max=379, avg=31.66, stdev= 7.66 00:22:53.256 clat (usec): min=3435, max=15017, avg=8520.82, stdev=1490.85 00:22:53.256 lat (usec): min=3465, max=15128, avg=8552.47, stdev=1492.42 00:22:53.256 clat percentiles (usec): 00:22:53.256 | 1.00th=[ 5604], 5.00th=[ 6456], 10.00th=[ 6783], 20.00th=[ 7308], 00:22:53.256 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:22:53.256 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10683], 95.00th=[11338], 00:22:53.256 | 99.00th=[12256], 99.50th=[12649], 99.90th=[14746], 99.95th=[14877], 00:22:53.256 | 99.99th=[15008] 00:22:53.256 bw ( KiB/s): min=87840, max=99712, per=90.25%, avg=92160.00, stdev=5194.33, samples=4 00:22:53.256 iops : min= 5490, max= 6232, avg=5760.00, stdev=324.65, samples=4 00:22:53.256 lat (msec) : 2=0.08%, 4=1.60%, 10=90.47%, 20=7.86% 00:22:53.256 cpu : usr=86.47%, sys=12.68%, ctx=49, majf=0, minf=3 00:22:53.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:53.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:53.256 issued rwts: total=22030,11584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:53.256 00:22:53.256 Run status group 0 (all jobs): 00:22:53.256 READ: bw=172MiB/s (180MB/s), 172MiB/s-172MiB/s (180MB/s-180MB/s), io=344MiB (361MB), run=2004-2004msec 00:22:53.256 WRITE: bw=99.7MiB/s (105MB/s), 99.7MiB/s-99.7MiB/s (105MB/s-105MB/s), io=181MiB (190MB), run=1815-1815msec 00:22:53.256 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.256 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:53.256 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:53.256 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:53.256 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:53.256 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:53.256 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:53.256 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:53.256 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:53.256 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:53.256 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:53.256 rmmod nvme_tcp 00:22:53.256 rmmod nvme_fabrics 00:22:53.256 rmmod nvme_keyring 00:22:53.514 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:53.514 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:53.514 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:53.514 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2012164 ']' 00:22:53.514 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2012164 00:22:53.514 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2012164 ']' 00:22:53.515 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 2012164 00:22:53.515 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:53.515 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.515 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2012164 00:22:53.515 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:53.515 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:53.515 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2012164' 00:22:53.515 killing process with pid 2012164 00:22:53.515 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2012164 00:22:53.515 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2012164 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.773 16:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.680 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:55.680 00:22:55.680 real 0m16.158s 00:22:55.680 user 0m48.233s 00:22:55.680 sys 0m6.357s 00:22:55.680 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:55.680 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.680 ************************************ 00:22:55.680 END TEST nvmf_fio_host 00:22:55.680 ************************************ 00:22:55.680 16:24:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:55.680 16:24:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:55.680 16:24:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.680 16:24:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.680 ************************************ 00:22:55.680 START TEST nvmf_failover 00:22:55.680 ************************************ 00:22:55.680 16:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:55.940 * Looking for test storage... 00:22:55.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.940 16:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:55.940 16:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:55.940 16:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.940 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:55.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.941 --rc genhtml_branch_coverage=1 00:22:55.941 --rc genhtml_function_coverage=1 00:22:55.941 --rc genhtml_legend=1 00:22:55.941 --rc geninfo_all_blocks=1 00:22:55.941 --rc geninfo_unexecuted_blocks=1 00:22:55.941 00:22:55.941 ' 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:55.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.941 --rc genhtml_branch_coverage=1 00:22:55.941 --rc genhtml_function_coverage=1 00:22:55.941 --rc genhtml_legend=1 00:22:55.941 --rc geninfo_all_blocks=1 00:22:55.941 --rc geninfo_unexecuted_blocks=1 00:22:55.941 00:22:55.941 ' 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:55.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.941 --rc genhtml_branch_coverage=1 00:22:55.941 --rc genhtml_function_coverage=1 00:22:55.941 --rc genhtml_legend=1 00:22:55.941 --rc geninfo_all_blocks=1 00:22:55.941 --rc geninfo_unexecuted_blocks=1 00:22:55.941 00:22:55.941 ' 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:55.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.941 --rc genhtml_branch_coverage=1 00:22:55.941 --rc genhtml_function_coverage=1 00:22:55.941 --rc genhtml_legend=1 00:22:55.941 --rc geninfo_all_blocks=1 00:22:55.941 --rc geninfo_unexecuted_blocks=1 00:22:55.941 00:22:55.941 ' 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:55.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
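For readability, the values that nvmf/common.sh and the failover.sh header pin down for this run are collected below. This is a hedged recap of the trace above, not additional captured output; every path and identifier is copied from the log, and the trailing comments are explanatory only.

NVMF_PORT=4420; NVMF_SECOND_PORT=4421; NVMF_THIRD_PORT=4422   # the three TCP listeners exercised by the failover test
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)     # resolves to nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 on this host
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
MALLOC_BDEV_SIZE=64                  # size in MB of the Malloc0 bdev created later
MALLOC_BLOCK_SIZE=512
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py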
00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:55.941 16:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.513 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:02.514 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:02.514 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:02.514 Found net devices under 0000:86:00.0: cvl_0_0 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:02.514 Found net devices under 0000:86:00.1: cvl_0_1 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:02.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:23:02.514 00:23:02.514 --- 10.0.0.2 ping statistics --- 00:23:02.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.514 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:02.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:23:02.514 00:23:02.514 --- 10.0.0.1 ping statistics --- 00:23:02.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.514 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:02.514 16:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:02.514 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:02.514 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.514 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.514 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.514 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2017095 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2017095 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2017095 ']' 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.515 [2024-11-20 16:24:33.088116] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:23:02.515 [2024-11-20 16:24:33.088160] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.515 [2024-11-20 16:24:33.163501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:02.515 [2024-11-20 16:24:33.205168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
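The nvmf_tcp_init plumbing traced above is easier to follow as a topology sketch: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule admits TCP port 4420, and the two pings confirm reachability before nvmf_tgt is launched inside the namespace. A condensed, hedged replay of those commands, assuming root and the same interface names:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # 0.288 ms in this run
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # 0.135 ms in this run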
00:23:02.515 [2024-11-20 16:24:33.205205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.515 [2024-11-20 16:24:33.205213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.515 [2024-11-20 16:24:33.205218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.515 [2024-11-20 16:24:33.205239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.515 [2024-11-20 16:24:33.206565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.515 [2024-11-20 16:24:33.206602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.515 [2024-11-20 16:24:33.206602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:02.515 [2024-11-20 16:24:33.516686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.515 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:02.774 Malloc0 00:23:02.774 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.774 16:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:03.033 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.291 [2024-11-20 16:24:34.332191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.291 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:03.550 [2024-11-20 16:24:34.528689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:03.551 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:03.551 [2024-11-20 16:24:34.713255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:23:03.551 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:03.551 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2017356 00:23:03.551 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.551 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2017356 /var/tmp/bdevperf.sock 00:23:03.551 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2017356 ']' 00:23:03.551 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.551 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.551 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.551 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.551 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:03.809 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.810 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:03.810 16:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:04.068 NVMe0n1 00:23:04.068 16:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:04.636 00:23:04.636 16:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2017576 00:23:04.636 16:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:04.636 16:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:05.573 16:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:05.833 [2024-11-20 16:24:36.876642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 
[2024-11-20 16:24:36.876727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876988] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.876994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.877001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.877007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.877013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.877019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.877025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.877030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.877036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 [2024-11-20 16:24:36.877042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2d0 is same with the state(6) to be set 00:23:05.833 16:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:09.126 16:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:09.126 00:23:09.126 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:09.384 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:12.672 16:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.672 [2024-11-20 16:24:43.724321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.672 16:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:13.609 16:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:13.868 16:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2017576 00:23:20.445 { 00:23:20.445 "results": [ 00:23:20.445 { 00:23:20.445 "job": "NVMe0n1", 00:23:20.445 "core_mask": "0x1", 00:23:20.445 "workload": "verify", 00:23:20.445 "status": "finished", 00:23:20.445 "verify_range": { 00:23:20.445 "start": 0, 00:23:20.445 "length": 16384 00:23:20.445 }, 00:23:20.445 "queue_depth": 128, 00:23:20.445 "io_size": 4096, 00:23:20.445 "runtime": 15.004739, 00:23:20.445 "iops": 11023.384012211076, 00:23:20.445 "mibps": 
43.060093797699516, 00:23:20.445 "io_failed": 20405, 00:23:20.445 "io_timeout": 0, 00:23:20.445 "avg_latency_us": 10314.673825802774, 00:23:20.445 "min_latency_us": 415.45142857142855, 00:23:20.445 "max_latency_us": 16352.792380952382 00:23:20.445 } 00:23:20.445 ], 00:23:20.445 "core_count": 1 00:23:20.445 } 00:23:20.445 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2017356 00:23:20.445 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2017356 ']' 00:23:20.445 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2017356 00:23:20.445 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:20.445 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.445 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2017356 00:23:20.445 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.445 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.445 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2017356' 00:23:20.445 killing process with pid 2017356 00:23:20.445 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2017356 00:23:20.445 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2017356 00:23:20.445 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:20.445 [2024-11-20 16:24:34.774078] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:23:20.445 [2024-11-20 16:24:34.774133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017356 ] 00:23:20.445 [2024-11-20 16:24:34.851083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.445 [2024-11-20 16:24:34.892077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.445 Running I/O for 15 seconds... 
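To summarize what produced the numbers above: the verify workload sustained roughly 11023 IOPS over the 15-second run while 20405 I/Os were reported failed (io_failed) as listeners were removed and re-added underneath it. A condensed, hedged replay of the RPC sequence traced earlier, with the long workspace prefix shortened to $rpc_py and the bdevperf RPC socket at /var/tmp/bdevperf.sock as defined above:

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# bdevperf was started with: -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# perform_tests starts the I/O, then paths are pulled and re-added under load:
sleep 1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422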
00:23:20.445 11475.00 IOPS, 44.82 MiB/s [2024-11-20T15:24:51.679Z] [2024-11-20 16:24:36.877645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-11-20 16:24:36.877869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.445 [2024-11-20 16:24:36.877877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.877883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.877891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.877897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.877905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.877912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.877920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.877927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.877936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.877942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.877950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.877957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.877965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.877972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:20.446 [2024-11-20 16:24:36.877980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.877986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.877994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 
16:24:36.878126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-11-20 16:24:36.878250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.446 [2024-11-20 16:24:36.878259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-11-20 16:24:36.878660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.447 [2024-11-20 16:24:36.878667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100840 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:20.448 [2024-11-20 16:24:36.878874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.878988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.878996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.879002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.879010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 
16:24:36.879016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.879024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.879031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.879040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.879046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.879054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-11-20 16:24:36.879060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.879068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.448 [2024-11-20 16:24:36.879075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.448 [2024-11-20 16:24:36.879083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.448 [2024-11-20 16:24:36.879089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.449 [2024-11-20 16:24:36.879513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.449 [2024-11-20 16:24:36.879521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.450 [2024-11-20 16:24:36.879527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:36.879547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.450 [2024-11-20 16:24:36.879555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101288 len:8 PRP1 0x0 PRP2 0x0 00:23:20.450 [2024-11-20 16:24:36.879562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:36.879571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.450 [2024-11-20 16:24:36.879576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.450 [2024-11-20 16:24:36.879584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101296 len:8 PRP1 0x0 PRP2 0x0 00:23:20.450 [2024-11-20 16:24:36.879590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:36.879633] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:20.450 [2024-11-20 16:24:36.879654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.450 [2024-11-20 16:24:36.879662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:36.879669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:20.450 [2024-11-20 16:24:36.879676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:36.879683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.450 [2024-11-20 16:24:36.879689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:36.879696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.450 [2024-11-20 16:24:36.879702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:36.879708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:20.450 [2024-11-20 16:24:36.882506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:20.450 [2024-11-20 16:24:36.882533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1467340 (9): Bad file descriptor 00:23:20.450 [2024-11-20 16:24:37.025932] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:20.450 10470.00 IOPS, 40.90 MiB/s [2024-11-20T15:24:51.684Z] 10844.67 IOPS, 42.36 MiB/s [2024-11-20T15:24:51.684Z] 11002.00 IOPS, 42.98 MiB/s [2024-11-20T15:24:51.684Z] [2024-11-20 16:24:40.507090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-11-20 16:24:40.507347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.450 [2024-11-20 16:24:40.507361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.450 [2024-11-20 
16:24:40.507380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.450 [2024-11-20 16:24:40.507403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-11-20 16:24:40.507413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.450 [2024-11-20 16:24:40.507420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.451 [2024-11-20 16:24:40.507772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.451 [2024-11-20 16:24:40.507778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 
16:24:40.507973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.507987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.507993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.508001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.508008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.508015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.508021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.508029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.508035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.508043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.452 [2024-11-20 16:24:40.508049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.508072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.452 [2024-11-20 16:24:40.508079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78792 len:8 PRP1 0x0 PRP2 0x0 00:23:20.452 [2024-11-20 16:24:40.508085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.508094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.452 [2024-11-20 16:24:40.508099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.452 [2024-11-20 16:24:40.508105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78800 len:8 PRP1 0x0 PRP2 0x0 00:23:20.452 [2024-11-20 16:24:40.508113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.508120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.452 [2024-11-20 16:24:40.508125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.452 [2024-11-20 16:24:40.508130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78808 len:8 PRP1 0x0 PRP2 0x0 00:23:20.452 
[2024-11-20 16:24:40.508136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.508142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.452 [2024-11-20 16:24:40.508147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.452 [2024-11-20 16:24:40.508153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78816 len:8 PRP1 0x0 PRP2 0x0 00:23:20.452 [2024-11-20 16:24:40.508159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.508165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.452 [2024-11-20 16:24:40.508170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.452 [2024-11-20 16:24:40.508175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78824 len:8 PRP1 0x0 PRP2 0x0 00:23:20.452 [2024-11-20 16:24:40.508181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.508188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.452 [2024-11-20 16:24:40.508192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.452 [2024-11-20 16:24:40.508198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78832 len:8 PRP1 0x0 PRP2 0x0 00:23:20.452 [2024-11-20 16:24:40.508209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.452 [2024-11-20 16:24:40.508215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78840 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78848 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78856 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78864 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78872 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78880 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78888 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78896 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78904 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78952 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:20.453 [2024-11-20 16:24:40.508581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78960 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.453 [2024-11-20 16:24:40.508605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.453 [2024-11-20 16:24:40.508610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.453 [2024-11-20 16:24:40.508616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:23:20.453 [2024-11-20 16:24:40.508622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78976 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78984 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78992 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79000 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508723] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79008 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79016 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79024 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79032 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79040 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79064 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79072 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79080 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79088 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.508978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 16:24:40.508983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.508988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79096 len:8 PRP1 0x0 PRP2 0x0 00:23:20.454 [2024-11-20 16:24:40.508994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.454 [2024-11-20 16:24:40.509000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.454 [2024-11-20 
16:24:40.509005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.454 [2024-11-20 16:24:40.509010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79104 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79112 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79120 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79128 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79136 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79144 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509145] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79152 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79160 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79168 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79176 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79184 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79192 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79200 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79208 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79216 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79224 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.455 [2024-11-20 16:24:40.509390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79232 len:8 PRP1 0x0 PRP2 0x0 00:23:20.455 [2024-11-20 16:24:40.509396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.455 [2024-11-20 16:24:40.509403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.455 [2024-11-20 16:24:40.509408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.456 [2024-11-20 16:24:40.509413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79240 len:8 PRP1 0x0 PRP2 0x0 00:23:20.456 [2024-11-20 16:24:40.509419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.456 [2024-11-20 16:24:40.509425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.456 [2024-11-20 16:24:40.509430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.456 [2024-11-20 
16:24:40.509435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79248 len:8 PRP1 0x0 PRP2 0x0 00:23:20.456 [2024-11-20 16:24:40.509443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.456 [2024-11-20 16:24:40.509450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.456 [2024-11-20 16:24:40.509454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.456 [2024-11-20 16:24:40.509460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79256 len:8 PRP1 0x0 PRP2 0x0 00:23:20.456 [2024-11-20 16:24:40.509466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.456 [2024-11-20 16:24:40.509473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.456 [2024-11-20 16:24:40.509477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.456 [2024-11-20 16:24:40.509483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:23:20.456 [2024-11-20 16:24:40.509489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.456 [2024-11-20 16:24:40.509495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.456 [2024-11-20 16:24:40.509500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.456 [2024-11-20 16:24:40.509506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:23:20.456 [2024-11-20 16:24:40.509512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.456 [2024-11-20 16:24:40.513957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.456 [2024-11-20 16:24:40.513967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.456 [2024-11-20 16:24:40.513974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:23:20.456 [2024-11-20 16:24:40.513981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.456 [2024-11-20 16:24:40.513988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.456 [2024-11-20 16:24:40.513993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.456 [2024-11-20 16:24:40.513998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:23:20.456 [2024-11-20 16:24:40.514004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.456 [2024-11-20 16:24:40.514011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.457 [2024-11-20 16:24:40.514018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.457 [2024-11-20 16:24:40.514023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:23:20.457 [2024-11-20 16:24:40.514029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:40.514036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.457 [2024-11-20 16:24:40.514041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.457 [2024-11-20 16:24:40.514046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:23:20.457 [2024-11-20 16:24:40.514052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:40.514058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.457 [2024-11-20 16:24:40.514063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.457 [2024-11-20 16:24:40.514068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78400 len:8 PRP1 0x0 PRP2 0x0 00:23:20.457 [2024-11-20 16:24:40.514075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:40.514118] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:20.457 [2024-11-20 16:24:40.514140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.457 [2024-11-20 16:24:40.514148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:40.514155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.457 [2024-11-20 16:24:40.514162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:40.514169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.457 [2024-11-20 16:24:40.514175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:40.514182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.457 [2024-11-20 16:24:40.514188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:40.514194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:23:20.457 [2024-11-20 16:24:40.514221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1467340 (9): Bad file descriptor 00:23:20.457 [2024-11-20 16:24:40.516990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:20.457 [2024-11-20 16:24:40.672900] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:23:20.457 10653.40 IOPS, 41.61 MiB/s [2024-11-20T15:24:51.691Z] 10810.83 IOPS, 42.23 MiB/s [2024-11-20T15:24:51.691Z] 10911.57 IOPS, 42.62 MiB/s [2024-11-20T15:24:51.691Z] 10973.62 IOPS, 42.87 MiB/s [2024-11-20T15:24:51.691Z] 11017.67 IOPS, 43.04 MiB/s [2024-11-20T15:24:51.691Z] [2024-11-20 16:24:44.945100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.457 [2024-11-20 16:24:44.945141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.457 [2024-11-20 16:24:44.945170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.457 [2024-11-20 16:24:44.945185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.457 [2024-11-20 16:24:44.945205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.457 [2024-11-20 16:24:44.945220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.457 [2024-11-20 16:24:44.945235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.457 [2024-11-20 16:24:44.945249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.457 [2024-11-20 16:24:44.945265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 
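The notices above show bdev_nvme failing over nqn.2016-06.io.spdk:cnode1 from listener 10.0.0.2:4421 to 10.0.0.2:4422: queued READ/WRITE commands on the old queue pair are completed manually with ABORTED - SQ DELETION, the controller is disconnected and reset, and the interleaved bdevperf samples show throughput recovering once the reset succeeds. A minimal sketch of how such a two-path failover setup can be configured with SPDK's rpc.py is given below; the bdev/namespace names (Nvme0, Malloc0), the second listener port, and the --multipath flag spelling are illustrative assumptions, not values taken from this job's scripts.

# target side: one subsystem exposed on two TCP listeners (second port assumed)
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# initiator side: attach both paths under the same controller name so bdev_nvme
# can fail over between them when one listener goes away
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --multipath failover

With this arrangement, removing the 4421 listener (or dropping the connection) is what triggers log sequences like the one above: outstanding I/O is aborted with SQ DELETION status and reissued after bdev_nvme resets onto the surviving path.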
[2024-11-20 16:24:44.945273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.457 [2024-11-20 16:24:44.945280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.457 [2024-11-20 16:24:44.945294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.457 [2024-11-20 16:24:44.945309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.457 [2024-11-20 16:24:44.945324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.457 [2024-11-20 16:24:44.945338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.457 [2024-11-20 16:24:44.945352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.457 [2024-11-20 16:24:44.945369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.457 [2024-11-20 16:24:44.945383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-20 16:24:44.945391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.457 [2024-11-20 16:24:44.945397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:116 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.458 [2024-11-20 16:24:44.945597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.458 [2024-11-20 16:24:44.945761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.458 [2024-11-20 16:24:44.945767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 
16:24:44.945851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.945988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.945994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.459 [2024-11-20 16:24:44.946178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.459 [2024-11-20 16:24:44.946211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.459 [2024-11-20 16:24:44.946219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:20.460 [2024-11-20 16:24:44.946291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.460 [2024-11-20 16:24:44.946410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946432] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946575] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.460 [2024-11-20 16:24:44.946706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.460 [2024-11-20 16:24:44.946713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5824 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 
[2024-11-20 16:24:44.946871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.461 [2024-11-20 16:24:44.946970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.946977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496540 is same with the state(6) to be set 00:23:20.461 [2024-11-20 16:24:44.946985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.461 [2024-11-20 16:24:44.946992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.461 [2024-11-20 16:24:44.946998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5968 len:8 PRP1 0x0 PRP2 0x0 00:23:20.461 [2024-11-20 16:24:44.947005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.947048] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:20.461 
[2024-11-20 16:24:44.947071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.461 [2024-11-20 16:24:44.947078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.947086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.461 [2024-11-20 16:24:44.947092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.947099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.461 [2024-11-20 16:24:44.947105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.947112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.461 [2024-11-20 16:24:44.947119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.461 [2024-11-20 16:24:44.947125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:20.461 [2024-11-20 16:24:44.949934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:20.461 [2024-11-20 16:24:44.949963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1467340 (9): Bad file descriptor 00:23:20.461 [2024-11-20 16:24:45.067175] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
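The block of notices above is one complete failover cycle: once the connection to 10.0.0.2:4422 fails, every queued READ/WRITE on I/O qpair 1 and the outstanding ASYNC EVENT REQUESTs on the admin qpair are completed with ABORTED - SQ DELETION, bdev_nvme starts a failover to 10.0.0.2:4420, and the reset finishes with "Resetting controller successful". The script later counts those success notices and requires exactly three (the count=3 / (( count != 3 )) trace further down). A minimal sketch of that check, assuming the try.txt log the script cats afterwards; variable names and the error handling here are illustrative, not the exact failover.sh code:

  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  # count the controller resets that completed successfully during the run
  count=$(grep -c 'Resetting controller successful' "$log")
  # the failover test passes only if exactly three recoveries were logged
  (( count == 3 )) || { echo "expected 3 recoveries, got $count" >&2; exit 1; }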
00:23:20.461 10902.90 IOPS, 42.59 MiB/s [2024-11-20T15:24:51.695Z] 10943.09 IOPS, 42.75 MiB/s [2024-11-20T15:24:51.695Z] 10962.92 IOPS, 42.82 MiB/s [2024-11-20T15:24:51.695Z] 10983.54 IOPS, 42.90 MiB/s [2024-11-20T15:24:51.695Z] 11011.86 IOPS, 43.02 MiB/s 00:23:20.461 Latency(us) 00:23:20.461 [2024-11-20T15:24:51.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.461 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:20.461 Verification LBA range: start 0x0 length 0x4000 00:23:20.462 NVMe0n1 : 15.00 11023.38 43.06 1359.90 0.00 10314.67 415.45 16352.79 00:23:20.462 [2024-11-20T15:24:51.696Z] =================================================================================================================== 00:23:20.462 [2024-11-20T15:24:51.696Z] Total : 11023.38 43.06 1359.90 0.00 10314.67 415.45 16352.79 00:23:20.462 Received shutdown signal, test time was about 15.000000 seconds 00:23:20.462 00:23:20.462 Latency(us) 00:23:20.462 [2024-11-20T15:24:51.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.462 [2024-11-20T15:24:51.696Z] =================================================================================================================== 00:23:20.462 [2024-11-20T15:24:51.696Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2020100 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2020100 /var/tmp/bdevperf.sock 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2020100 ']' 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
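The summary above reports the verify workload at 11023.38 IOPS and 43.06 MiB/s over the 15.00-second run with 4096-byte I/O. The two figures are consistent, since MiB/s = IOPS x io_size / 2^20; a one-line sanity check (this awk call is only an illustration, not part of the test):

  # 11023.38 IOPS of 4096-byte I/O expressed in MiB/s (1 MiB = 1048576 bytes)
  awk 'BEGIN { printf "%.2f MiB/s\n", 11023.38 * 4096 / (1024 * 1024) }'
  # prints 43.06 MiB/s, matching the NVMe0n1 row of the table above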
00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:20.462 [2024-11-20 16:24:51.491163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:20.462 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:20.721 [2024-11-20 16:24:51.675684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:20.721 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:20.980 NVMe0n1 00:23:20.980 16:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:21.239 00:23:21.239 16:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:21.499 00:23:21.499 16:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:21.499 16:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:21.759 16:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:22.017 16:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:25.306 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.306 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:25.307 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2020826 00:23:25.307 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:25.307 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2020826 00:23:26.243 { 00:23:26.243 "results": [ 00:23:26.243 { 00:23:26.243 "job": "NVMe0n1", 00:23:26.243 "core_mask": "0x1", 
00:23:26.243 "workload": "verify", 00:23:26.243 "status": "finished", 00:23:26.243 "verify_range": { 00:23:26.243 "start": 0, 00:23:26.243 "length": 16384 00:23:26.243 }, 00:23:26.243 "queue_depth": 128, 00:23:26.243 "io_size": 4096, 00:23:26.243 "runtime": 1.004122, 00:23:26.244 "iops": 11355.193890782195, 00:23:26.244 "mibps": 44.35622613586795, 00:23:26.244 "io_failed": 0, 00:23:26.244 "io_timeout": 0, 00:23:26.244 "avg_latency_us": 11230.938556143034, 00:23:26.244 "min_latency_us": 499.32190476190476, 00:23:26.244 "max_latency_us": 9050.209523809524 00:23:26.244 } 00:23:26.244 ], 00:23:26.244 "core_count": 1 00:23:26.244 } 00:23:26.244 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.244 [2024-11-20 16:24:51.113953] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:23:26.244 [2024-11-20 16:24:51.114010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020100 ] 00:23:26.244 [2024-11-20 16:24:51.189214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.244 [2024-11-20 16:24:51.227067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.244 [2024-11-20 16:24:53.033318] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:26.244 [2024-11-20 16:24:53.033363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.244 [2024-11-20 16:24:53.033374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.244 [2024-11-20 16:24:53.033383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.244 [2024-11-20 16:24:53.033390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.244 [2024-11-20 16:24:53.033398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.244 [2024-11-20 16:24:53.033405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.244 [2024-11-20 16:24:53.033413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.244 [2024-11-20 16:24:53.033420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.244 [2024-11-20 16:24:53.033428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:23:26.244 [2024-11-20 16:24:53.033454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:26.244 [2024-11-20 16:24:53.033467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2076340 (9): Bad file descriptor 00:23:26.244 [2024-11-20 16:24:53.040337] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:26.244 Running I/O for 1 seconds... 00:23:26.244 11274.00 IOPS, 44.04 MiB/s 00:23:26.244 Latency(us) 00:23:26.244 [2024-11-20T15:24:57.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.244 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:26.244 Verification LBA range: start 0x0 length 0x4000 00:23:26.244 NVMe0n1 : 1.00 11355.19 44.36 0.00 0.00 11230.94 499.32 9050.21 00:23:26.244 [2024-11-20T15:24:57.478Z] =================================================================================================================== 00:23:26.244 [2024-11-20T15:24:57.478Z] Total : 11355.19 44.36 0.00 0.00 11230.94 499.32 9050.21 00:23:26.244 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:26.244 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.502 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.761 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.761 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:27.020 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:27.020 16:24:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2020100 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2020100 ']' 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2020100 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2020100 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2020100' 00:23:30.309 killing process with pid 2020100 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2020100 00:23:30.309 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2020100 00:23:30.567 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:30.568 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.827 rmmod nvme_tcp 00:23:30.827 rmmod nvme_fabrics 00:23:30.827 rmmod nvme_keyring 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2017095 ']' 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2017095 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2017095 ']' 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2017095 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2017095 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2017095' 00:23:30.827 killing process with pid 2017095 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2017095 00:23:30.827 16:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2017095 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.087 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.016 16:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.016 00:23:33.016 real 0m37.307s 00:23:33.016 user 1m58.026s 00:23:33.016 sys 0m7.882s 00:23:33.016 16:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.016 16:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.016 ************************************ 00:23:33.016 END TEST nvmf_failover 00:23:33.016 ************************************ 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.345 ************************************ 00:23:33.345 START TEST nvmf_host_discovery 00:23:33.345 ************************************ 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:33.345 * Looking for test storage... 
00:23:33.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:33.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.345 --rc genhtml_branch_coverage=1 00:23:33.345 --rc genhtml_function_coverage=1 00:23:33.345 --rc genhtml_legend=1 00:23:33.345 --rc geninfo_all_blocks=1 00:23:33.345 --rc geninfo_unexecuted_blocks=1 00:23:33.345 00:23:33.345 ' 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:33.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.345 --rc genhtml_branch_coverage=1 00:23:33.345 --rc genhtml_function_coverage=1 00:23:33.345 --rc genhtml_legend=1 00:23:33.345 --rc geninfo_all_blocks=1 00:23:33.345 --rc geninfo_unexecuted_blocks=1 00:23:33.345 00:23:33.345 ' 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:33.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.345 --rc genhtml_branch_coverage=1 00:23:33.345 --rc genhtml_function_coverage=1 00:23:33.345 --rc genhtml_legend=1 00:23:33.345 --rc geninfo_all_blocks=1 00:23:33.345 --rc geninfo_unexecuted_blocks=1 00:23:33.345 00:23:33.345 ' 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:33.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.345 --rc genhtml_branch_coverage=1 00:23:33.345 --rc genhtml_function_coverage=1 00:23:33.345 --rc genhtml_legend=1 00:23:33.345 --rc geninfo_all_blocks=1 00:23:33.345 --rc geninfo_unexecuted_blocks=1 00:23:33.345 00:23:33.345 ' 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:33.345 16:25:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.345 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.346 16:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:39.992 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.992 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:39.993 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.993 16:25:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:39.993 Found net devices under 0000:86:00.0: cvl_0_0 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:39.993 Found net devices under 0000:86:00.1: cvl_0_1 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.993 
16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:23:39.993 00:23:39.993 --- 10.0.0.2 ping statistics --- 00:23:39.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.993 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:39.993 00:23:39.993 --- 10.0.0.1 ping statistics --- 00:23:39.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.993 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2025260 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2025260 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2025260 ']' 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.993 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.994 [2024-11-20 16:25:10.450375] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:23:39.994 [2024-11-20 16:25:10.450423] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.994 [2024-11-20 16:25:10.530516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.994 [2024-11-20 16:25:10.569612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.994 [2024-11-20 16:25:10.569647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.994 [2024-11-20 16:25:10.569655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.994 [2024-11-20 16:25:10.569662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.994 [2024-11-20 16:25:10.569667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.994 [2024-11-20 16:25:10.570221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.994 [2024-11-20 16:25:10.715569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.994 [2024-11-20 16:25:10.727771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.994 null0 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.994 null1 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2025372 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2025372 /tmp/host.sock 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2025372 ']' 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:39.994 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.994 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.994 [2024-11-20 16:25:10.810969] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:23:39.994 [2024-11-20 16:25:10.811015] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2025372 ] 00:23:39.994 [2024-11-20 16:25:10.887465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.994 [2024-11-20 16:25:10.932235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:39.994 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.995 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.253 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.254 [2024-11-20 16:25:11.357380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:40.254 16:25:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.254 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.511 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:40.511 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:40.511 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:40.511 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:40.511 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:40.511 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.511 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:40.512 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:41.077 [2024-11-20 16:25:12.093354] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:41.078 [2024-11-20 16:25:12.093375] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:41.078 [2024-11-20 16:25:12.093390] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.078 
[2024-11-20 16:25:12.179642] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:41.336 [2024-11-20 16:25:12.402756] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:41.336 [2024-11-20 16:25:12.403588] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x732df0:1 started. 00:23:41.336 [2024-11-20 16:25:12.404979] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:41.336 [2024-11-20 16:25:12.404994] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:41.336 [2024-11-20 16:25:12.451909] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x732df0 was disconnected and freed. delete nvme_qpair. 00:23:41.336 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.336 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:41.336 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:41.336 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.336 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.336 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.336 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.336 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.336 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.595 16:25:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.595 16:25:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:41.595 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.596 [2024-11-20 16:25:12.765110] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x701620:1 started. 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.596 [2024-11-20 16:25:12.772221] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x701620 was disconnected and freed. delete nvme_qpair. 
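[Editor's sketch, for readers following the xtrace above: the discovery flow exercised so far condenses to the RPC sequence below. This is assembled strictly from commands already visible in this trace, not from SPDK documentation; rpc_cmd is the autotest RPC wrapper used throughout the log, and -s /tmp/host.sock targets the host-side nvmf_tgt started at host/discovery.sh@44, while calls without -s go to the target app started inside the cvl_0_0_ns_spdk namespace.]

# target side (default RPC socket): transport, discovery listener on 8009, backing null bdevs
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512

# host side (-s /tmp/host.sock): start discovery against port 8009 before any NVM subsystem exists
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

# target side: publish cnode0 while discovery is running, as in host/discovery.sh@86..@111
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

# host side: what the waitforcondition checks poll for in this trace
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expects nvme0 once attached
rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'               # expects nvme0n1, then nvme0n2
rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length' # namespace-add notifications

[End of editor's sketch; the verbatim trace continues below.]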
00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.596 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.854 [2024-11-20 16:25:12.865378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.854 [2024-11-20 16:25:12.865497] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:41.854 [2024-11-20 16:25:12.865516] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.854 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.855 16:25:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.855 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.855 [2024-11-20 16:25:12.991895] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:41.855 16:25:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:41.855 16:25:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:41.855 [2024-11-20 16:25:13.057487] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:41.855 [2024-11-20 16:25:13.057521] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:41.855 [2024-11-20 16:25:13.057528] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:41.855 [2024-11-20 16:25:13.057533] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:42.788 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:42.788 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.048 [2024-11-20 16:25:14.121661] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:43.048 [2024-11-20 16:25:14.121682] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.048 [2024-11-20 16:25:14.127208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.048 [2024-11-20 16:25:14.127224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.048 [2024-11-20 16:25:14.127232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.048 [2024-11-20 16:25:14.127241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.048 [2024-11-20 16:25:14.127267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.048 [2024-11-20 16:25:14.127274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.048 [2024-11-20 16:25:14.127281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.048 [2024-11-20 16:25:14.127288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.048 [2024-11-20 16:25:14.127294] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x703390 is same with the state(6) to be set 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.048 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:43.048 [2024-11-20 16:25:14.137220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x703390 (9): Bad file descriptor 00:23:43.048 [2024-11-20 16:25:14.147253] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.048 [2024-11-20 16:25:14.147265] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:43.048 [2024-11-20 16:25:14.147269] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.048 [2024-11-20 16:25:14.147274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.048 [2024-11-20 16:25:14.147291] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:43.048 [2024-11-20 16:25:14.147486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.048 [2024-11-20 16:25:14.147500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x703390 with addr=10.0.0.2, port=4420 00:23:43.048 [2024-11-20 16:25:14.147508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x703390 is same with the state(6) to be set 00:23:43.048 [2024-11-20 16:25:14.147519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x703390 (9): Bad file descriptor 00:23:43.048 [2024-11-20 16:25:14.147529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.049 [2024-11-20 16:25:14.147535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.049 [2024-11-20 16:25:14.147543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.049 [2024-11-20 16:25:14.147548] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.049 [2024-11-20 16:25:14.147554] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.049 [2024-11-20 16:25:14.147559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
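The repeated rpc_cmd/jq/sort/xargs fragments traced above come from discovery.sh's small query helpers. A hedged reconstruction from the xtrace (pipeline order is inferred from the trace; the script itself may differ in quoting and details):

# Hedged reconstruction of the discovery.sh helpers traced at @55/@59/@63 above.
get_subsystem_names() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {
	# lists the trsvcid (port) of every path to controller $1, e.g. "4420 4421"
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}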
00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.049 [2024-11-20 16:25:14.157320] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.049 [2024-11-20 16:25:14.157330] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:43.049 [2024-11-20 16:25:14.157334] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.049 [2024-11-20 16:25:14.157338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.049 [2024-11-20 16:25:14.157350] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:43.049 [2024-11-20 16:25:14.157584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.049 [2024-11-20 16:25:14.157595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x703390 with addr=10.0.0.2, port=4420 00:23:43.049 [2024-11-20 16:25:14.157603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x703390 is same with the state(6) to be set 00:23:43.049 [2024-11-20 16:25:14.157613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x703390 (9): Bad file descriptor 00:23:43.049 [2024-11-20 16:25:14.157622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.049 [2024-11-20 16:25:14.157631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.049 [2024-11-20 16:25:14.157638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.049 [2024-11-20 16:25:14.157643] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.049 [2024-11-20 16:25:14.157648] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.049 [2024-11-20 16:25:14.157651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:43.049 [2024-11-20 16:25:14.167382] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.049 [2024-11-20 16:25:14.167396] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:43.049 [2024-11-20 16:25:14.167400] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.049 [2024-11-20 16:25:14.167404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.049 [2024-11-20 16:25:14.167418] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:43.049 [2024-11-20 16:25:14.167595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.049 [2024-11-20 16:25:14.167608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x703390 with addr=10.0.0.2, port=4420 00:23:43.049 [2024-11-20 16:25:14.167615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x703390 is same with the state(6) to be set 00:23:43.049 [2024-11-20 16:25:14.167625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x703390 (9): Bad file descriptor 00:23:43.049 [2024-11-20 16:25:14.167634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.049 [2024-11-20 16:25:14.167640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.049 [2024-11-20 16:25:14.167647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.049 [2024-11-20 16:25:14.167652] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.049 [2024-11-20 16:25:14.167656] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.049 [2024-11-20 16:25:14.167660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:43.049 [2024-11-20 16:25:14.177449] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.049 [2024-11-20 16:25:14.177461] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:43.049 [2024-11-20 16:25:14.177465] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.049 [2024-11-20 16:25:14.177473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.049 [2024-11-20 16:25:14.177485] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:43.049 [2024-11-20 16:25:14.177663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.049 [2024-11-20 16:25:14.177681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x703390 with addr=10.0.0.2, port=4420 00:23:43.049 [2024-11-20 16:25:14.177688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x703390 is same with the state(6) to be set 00:23:43.049 [2024-11-20 16:25:14.177698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x703390 (9): Bad file descriptor 00:23:43.049 [2024-11-20 16:25:14.177707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.049 [2024-11-20 16:25:14.177713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.049 [2024-11-20 16:25:14.177719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.049 [2024-11-20 16:25:14.177725] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.049 [2024-11-20 16:25:14.177729] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.049 [2024-11-20 16:25:14.177732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:43.049 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.049 [2024-11-20 16:25:14.187517] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.049 [2024-11-20 16:25:14.187532] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:43.049 [2024-11-20 16:25:14.187536] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.050 [2024-11-20 16:25:14.187540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.050 [2024-11-20 16:25:14.187553] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:43.050 [2024-11-20 16:25:14.187754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.050 [2024-11-20 16:25:14.187767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x703390 with addr=10.0.0.2, port=4420 00:23:43.050 [2024-11-20 16:25:14.187774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x703390 is same with the state(6) to be set 00:23:43.050 [2024-11-20 16:25:14.187784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x703390 (9): Bad file descriptor 00:23:43.050 [2024-11-20 16:25:14.187793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.050 [2024-11-20 16:25:14.187798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.050 [2024-11-20 16:25:14.187805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.050 [2024-11-20 16:25:14.187813] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.050 [2024-11-20 16:25:14.187818] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.050 [2024-11-20 16:25:14.187821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:43.050 [2024-11-20 16:25:14.197584] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.050 [2024-11-20 16:25:14.197595] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:43.050 [2024-11-20 16:25:14.197598] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.050 [2024-11-20 16:25:14.197602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.050 [2024-11-20 16:25:14.197615] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:43.050 [2024-11-20 16:25:14.197835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.050 [2024-11-20 16:25:14.197847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x703390 with addr=10.0.0.2, port=4420 00:23:43.050 [2024-11-20 16:25:14.197853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x703390 is same with the state(6) to be set 00:23:43.050 [2024-11-20 16:25:14.197863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x703390 (9): Bad file descriptor 00:23:43.050 [2024-11-20 16:25:14.197873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.050 [2024-11-20 16:25:14.197879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.050 [2024-11-20 16:25:14.197885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.050 [2024-11-20 16:25:14.197890] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:43.050 [2024-11-20 16:25:14.197895] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.050 [2024-11-20 16:25:14.197898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:43.050 [2024-11-20 16:25:14.207644] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.050 [2024-11-20 16:25:14.207654] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:43.050 [2024-11-20 16:25:14.207657] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.050 [2024-11-20 16:25:14.207661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.050 [2024-11-20 16:25:14.207672] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:43.050 [2024-11-20 16:25:14.207931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.050 [2024-11-20 16:25:14.207943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x703390 with addr=10.0.0.2, port=4420 00:23:43.050 [2024-11-20 16:25:14.207950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x703390 is same with the state(6) to be set 00:23:43.050 [2024-11-20 16:25:14.207960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x703390 (9): Bad file descriptor 00:23:43.050 [2024-11-20 16:25:14.207975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.050 [2024-11-20 16:25:14.207981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.050 [2024-11-20 16:25:14.207991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.050 [2024-11-20 16:25:14.207996] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.050 [2024-11-20 16:25:14.208000] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.050 [2024-11-20 16:25:14.208004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
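The autotest_common.sh@918-@924 lines interleaved with the reconnect errors are the harness's polling helper. A minimal sketch reconstructed from that trace (the retry limit of 10 and the one-second sleep are as traced; the failure return after the loop is an assumption, not visible in the log):

# Minimal sketch of waitforcondition as traced at autotest_common.sh@918-@924.
waitforcondition() {
	local cond=$1
	local max=10
	while (( max-- )); do
		if eval "$cond"; then
			return 0
		fi
		sleep 1
	done
	return 1  # assumed failure path, not visible in the trace
}
# e.g. waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'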
00:23:43.050 [2024-11-20 16:25:14.208980] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:43.050 [2024-11-20 16:25:14.208995] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.050 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.309 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.679 [2024-11-20 16:25:15.522345] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:44.679 [2024-11-20 16:25:15.522360] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:44.679 [2024-11-20 16:25:15.522372] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:44.679 [2024-11-20 16:25:15.608644] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:44.679 [2024-11-20 16:25:15.876910] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:44.679 [2024-11-20 16:25:15.877502] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x73ec80:1 started. 00:23:44.679 [2024-11-20 16:25:15.879055] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:44.679 [2024-11-20 16:25:15.879079] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:44.679 [2024-11-20 16:25:15.880212] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x73ec80 was disconnected and freed. delete nvme_qpair. 
00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.679 request: 00:23:44.679 { 00:23:44.679 "name": "nvme", 00:23:44.679 "trtype": "tcp", 00:23:44.679 "traddr": "10.0.0.2", 00:23:44.679 "adrfam": "ipv4", 00:23:44.679 "trsvcid": "8009", 00:23:44.679 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:44.679 "wait_for_attach": true, 00:23:44.679 "method": "bdev_nvme_start_discovery", 00:23:44.679 "req_id": 1 00:23:44.679 } 00:23:44.679 Got JSON-RPC error response 00:23:44.679 response: 00:23:44.679 { 00:23:44.679 "code": -17, 00:23:44.679 "message": "File exists" 00:23:44.679 } 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.679 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.938 16:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.938 request: 00:23:44.938 { 00:23:44.938 "name": "nvme_second", 00:23:44.938 "trtype": "tcp", 00:23:44.938 "traddr": "10.0.0.2", 00:23:44.938 "adrfam": "ipv4", 00:23:44.938 "trsvcid": "8009", 00:23:44.938 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:44.938 "wait_for_attach": true, 00:23:44.938 "method": "bdev_nvme_start_discovery", 00:23:44.938 "req_id": 1 00:23:44.938 } 00:23:44.938 Got JSON-RPC error response 00:23:44.938 response: 00:23:44.938 { 00:23:44.938 "code": -17, 00:23:44.938 "message": "File exists" 00:23:44.938 } 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.938 16:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.313 [2024-11-20 16:25:17.110473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.313 [2024-11-20 16:25:17.110498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73eec0 with addr=10.0.0.2, port=8010 00:23:46.313 [2024-11-20 16:25:17.110510] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:46.313 [2024-11-20 16:25:17.110516] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:46.313 [2024-11-20 16:25:17.110522] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:47.249 [2024-11-20 16:25:18.112946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.249 [2024-11-20 16:25:18.112970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73eec0 with addr=10.0.0.2, port=8010 00:23:47.249 [2024-11-20 16:25:18.112982] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:47.249 [2024-11-20 16:25:18.112987] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:47.249 [2024-11-20 16:25:18.113009] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:48.183 [2024-11-20 16:25:19.115094] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:48.183 request: 00:23:48.183 { 00:23:48.183 "name": "nvme_second", 00:23:48.183 "trtype": "tcp", 00:23:48.183 "traddr": "10.0.0.2", 00:23:48.183 "adrfam": "ipv4", 00:23:48.183 "trsvcid": "8010", 00:23:48.183 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:48.183 "wait_for_attach": false, 00:23:48.183 "attach_timeout_ms": 3000, 00:23:48.183 "method": "bdev_nvme_start_discovery", 00:23:48.183 "req_id": 1 00:23:48.183 } 00:23:48.183 Got JSON-RPC error response 00:23:48.183 response: 00:23:48.183 { 00:23:48.183 "code": -110, 00:23:48.183 "message": "Connection timed out" 00:23:48.183 } 00:23:48.183 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:48.183 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:48.183 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:48.183 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:48.183 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:48.183 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.184 
16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2025372 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.184 rmmod nvme_tcp 00:23:48.184 rmmod nvme_fabrics 00:23:48.184 rmmod nvme_keyring 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2025260 ']' 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2025260 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2025260 ']' 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2025260 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2025260 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2025260' 00:23:48.184 killing process with pid 2025260 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2025260 00:23:48.184 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2025260 00:23:48.443 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:48.443 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:48.443 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:48.443 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:48.443 16:25:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:48.443 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:48.443 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:48.443 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.443 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.443 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.443 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.443 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.350 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:50.350 00:23:50.350 real 0m17.243s 00:23:50.350 user 0m20.636s 00:23:50.350 sys 0m5.763s 00:23:50.350 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.350 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.350 ************************************ 00:23:50.350 END TEST nvmf_host_discovery 00:23:50.350 ************************************ 00:23:50.350 16:25:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:50.350 16:25:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:50.350 16:25:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:50.350 16:25:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.610 ************************************ 00:23:50.610 START TEST nvmf_host_multipath_status 00:23:50.610 ************************************ 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:50.610 * Looking for test storage... 
00:23:50.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:50.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.610 --rc genhtml_branch_coverage=1 00:23:50.610 --rc genhtml_function_coverage=1 00:23:50.610 --rc genhtml_legend=1 00:23:50.610 --rc geninfo_all_blocks=1 00:23:50.610 --rc geninfo_unexecuted_blocks=1 00:23:50.610 00:23:50.610 ' 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:50.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.610 --rc genhtml_branch_coverage=1 00:23:50.610 --rc genhtml_function_coverage=1 00:23:50.610 --rc genhtml_legend=1 00:23:50.610 --rc geninfo_all_blocks=1 00:23:50.610 --rc geninfo_unexecuted_blocks=1 00:23:50.610 00:23:50.610 ' 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:50.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.610 --rc genhtml_branch_coverage=1 00:23:50.610 --rc genhtml_function_coverage=1 00:23:50.610 --rc genhtml_legend=1 00:23:50.610 --rc geninfo_all_blocks=1 00:23:50.610 --rc geninfo_unexecuted_blocks=1 00:23:50.610 00:23:50.610 ' 00:23:50.610 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:50.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.611 --rc genhtml_branch_coverage=1 00:23:50.611 --rc genhtml_function_coverage=1 00:23:50.611 --rc genhtml_legend=1 00:23:50.611 --rc geninfo_all_blocks=1 00:23:50.611 --rc geninfo_unexecuted_blocks=1 00:23:50.611 00:23:50.611 ' 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
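The lt/cmp_versions/decimal calls above are the shell version comparison that scripts/common.sh uses to decide whether the installed lcov (1.15 here) is older than 2 before choosing LCOV_OPTS. A simplified stand-alone sketch of that dotted-version comparison, written for illustration rather than copied from scripts/common.sh:

# Simplified dotted-version "less than" check, mirroring the lt 1.15 2 logic
# traced above; field-by-field numeric comparison on . - : separators.
version_lt() {
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller field: older
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # all fields equal: not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the branch taken in the log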
00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:50.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:50.611 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:57.184 16:25:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:57.184 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
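The pci_devs/e810 bookkeeping above is the NIC discovery step: nvmf/common.sh builds tables of supported Intel/Mellanox device IDs and then walks sysfs to find the kernel interfaces behind each matching PCI function. A rough stand-alone sketch of that sysfs walk, using the same paths and the two E810 ports reported in this run (not the nvmf/common.sh code itself):

# Map the E810 PCI functions reported in this run (0x8086:0x159b) to their
# net interfaces via sysfs; addresses taken from the log, logic illustrative.
for pci in 0000:86:00.0 0000:86:00.1; do
    read -r vendor < "/sys/bus/pci/devices/$pci/vendor"
    read -r device < "/sys/bus/pci/devices/$pci/device"
    echo "Found $pci ($vendor - $device)"
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"   # cvl_0_0 / cvl_0_1
    done
done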
00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:57.184 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:57.184 Found net devices under 0000:86:00.0: cvl_0_0 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:23:57.184 Found net devices under 0000:86:00.1: cvl_0_1 00:23:57.184 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.185 16:25:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:57.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:23:57.185 00:23:57.185 --- 10.0.0.2 ping statistics --- 00:23:57.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.185 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:23:57.185 00:23:57.185 --- 10.0.0.1 ping statistics --- 00:23:57.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.185 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2030361 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2030361 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2030361 ']' 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.185 16:25:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.185 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:57.185 [2024-11-20 16:25:27.789014] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:23:57.185 [2024-11-20 16:25:27.789058] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.185 [2024-11-20 16:25:27.868761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:57.185 [2024-11-20 16:25:27.910115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.185 [2024-11-20 16:25:27.910151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.185 [2024-11-20 16:25:27.910159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.185 [2024-11-20 16:25:27.910164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.185 [2024-11-20 16:25:27.910170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.185 [2024-11-20 16:25:27.911381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.185 [2024-11-20 16:25:27.911382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.445 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.445 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:57.445 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.445 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:57.445 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:57.445 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.445 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2030361 00:23:57.445 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:57.704 [2024-11-20 16:25:28.827146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.704 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:57.963 Malloc0 00:23:57.963 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:23:58.221 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.479 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.479 [2024-11-20 16:25:29.642340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.479 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:58.737 [2024-11-20 16:25:29.838898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.737 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2030836 00:23:58.737 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:58.737 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.737 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2030836 /var/tmp/bdevperf.sock 00:23:58.737 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2030836 ']' 00:23:58.737 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.737 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.737 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
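Stripped of the xtrace noise, the target-side configuration performed above reduces to a short rpc.py sequence plus launching bdevperf in idle mode on its own RPC socket. The commands below are the ones visible in the trace (long workspace paths folded into variables), collected here only as a readable summary:

# Target setup as traced above, condensed for readability.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$rpc_py nvmf_create_transport -t tcp -o -u 8192                          # TCP transport
$rpc_py bdev_malloc_create 64 512 -b Malloc0                             # 64 MB bdev, 512 B blocks
$rpc_py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2    # -r: ANA reporting
$rpc_py nvmf_subsystem_add_ns "$NQN" Malloc0
$rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421

# Host side: bdevperf started idle (-z) on its own RPC socket, driven later
# through bdevperf.py perform_tests as shown further down the trace.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &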
00:23:58.737 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.737 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:58.996 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.996 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:58.996 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:59.254 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:59.512 Nvme0n1 00:23:59.770 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:00.028 Nvme0n1 00:24:00.028 16:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:00.028 16:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:02.558 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:02.558 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:02.558 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:02.558 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:03.492 16:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:03.492 16:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.492 16:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.492 16:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.750 16:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.750 16:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:03.750 16:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.750 16:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:04.009 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.009 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:04.009 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.009 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.268 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.268 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.268 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.268 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.268 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.268 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.268 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.268 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.526 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.526 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:04.526 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.526 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.784 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.784 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:04.784 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
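Each check_status line above expands into six port_status probes: the bdevperf RPC socket is asked for its I/O paths, one field (current, connected, accessible) is extracted per listener port with jq, and the result is compared against the expected value. The helper is port_status() in host/multipath_status.sh; the sketch below is a condensed form kept deliberately close to the rpc.py and jq invocations shown in the trace, not the script's exact source:

# Condensed form of the port_status probe repeated throughout the trace.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

port_status() {
    local port=$1 field=$2 expected=$3 value
    value=$($rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $value == "$expected" ]]
}

# First check_status in the trace: 4420 is the current path, 4421 is reachable but not current.
port_status 4420 current    true
port_status 4421 current    false
port_status 4420 connected  true
port_status 4421 connected  true
port_status 4420 accessible true
port_status 4421 accessible true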
00:24:05.042 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:05.301 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:06.236 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:06.236 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:06.236 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.236 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.494 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.494 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:06.494 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.494 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.494 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.494 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.494 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.494 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.753 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.753 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.753 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.753 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:07.011 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.011 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:07.011 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
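The ANA transitions driving those expectations come from set_ANA_state, which simply updates both target listeners and then waits a second before the next check; the two nvmf_subsystem_listener_set_ana_state calls appear verbatim in the trace, and the run cycles through optimized, non_optimized and inaccessible combinations. A condensed sketch of that helper:

# Condensed form of the ANA-state helper exercised above and below.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {       # $1 = state for the 4420 listener, $2 = state for 4421
    $rpc_py nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized optimized   # the transition logged just above
sleep 1                                 # the trace sleeps 1s before re-checking path status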
00:24:07.011 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.269 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.269 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.269 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.269 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.527 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.527 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:07.527 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.527 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:07.785 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:08.719 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:08.719 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:08.719 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.719 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.977 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.977 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:08.977 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.977 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.235 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.235 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.235 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.235 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.494 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.494 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.494 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.494 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.752 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.752 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.752 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.753 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.011 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.011 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:10.011 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.011 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:10.011 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.011 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:10.011 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:10.268 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:10.526 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:11.459 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:11.459 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:11.459 16:25:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.459 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.718 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.718 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:11.718 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.718 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.976 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.976 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.976 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.976 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:12.233 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.234 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:12.234 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.234 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:12.491 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.491 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:12.491 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.491 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:12.492 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.492 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:12.492 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.492 16:25:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.750 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.750 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:12.750 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:13.007 16:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:13.265 16:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:14.197 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:14.197 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:14.197 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.197 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:14.455 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.455 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:14.455 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:14.455 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.455 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.455 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:14.455 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.455 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:14.712 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.712 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:14.712 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:24:14.712 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.970 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.970 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:14.970 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.970 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.227 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:15.227 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:15.227 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.227 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:15.228 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:15.228 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:15.228 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:15.485 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.743 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:16.678 16:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:16.678 16:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:16.678 16:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.678 16:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.935 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.935 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.935 16:25:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.935 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:17.192 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.192 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:17.192 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.192 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.450 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.450 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.450 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.450 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.450 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.450 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:17.450 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.450 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.708 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.708 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.708 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.708 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.965 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.965 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:18.222 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:18.222 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:18.479 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:18.736 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:19.668 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:19.668 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:19.668 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.668 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:19.927 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.927 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:19.927 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.927 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:19.927 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.927 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:20.184 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.184 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:20.184 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.184 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.184 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.184 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.441 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.441 16:25:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.441 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.441 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.699 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.699 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.699 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.699 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:20.957 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.957 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:20.957 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:21.214 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:21.214 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:22.192 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:22.192 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:22.471 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.471 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.471 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.471 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:22.471 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.471 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:22.740 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.740 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:22.740 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.740 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:22.998 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.998 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:22.998 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.998 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.998 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.998 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.998 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.998 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:23.256 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.256 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:23.256 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.256 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:23.513 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.513 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:23.513 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:23.771 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:24.029 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
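The set-and-verify loop traced above reduces to three RPCs; below is a minimal bash sketch of the same pattern (script path, NQN, address and ports are taken from the trace; the helper names set_ana and path_field are illustrative stand-ins for the test script's own set_ANA_state/port_status/check_status helpers):

  # Sketch of the ANA set-and-verify step exercised by the trace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  set_ana() {           # set_ana <state-for-4420> <state-for-4421>
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  path_field() {        # path_field <port> <current|connected|accessible>
      "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
  }

  set_ana non_optimized inaccessible
  sleep 1                                        # let the host process the ANA change notification
  [[ $(path_field 4420 accessible) == true ]]    # 4420 stays usable
  [[ $(path_field 4421 accessible) == false ]]   # 4421 is now reported as inaccessible

The bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call seen just before the optimized/optimized step is what allows both paths to report current=true at once; under the default active_passive policy only one path is current at a time, which matches the earlier check_status expectations in the trace.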
00:24:24.963 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:24.963 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:24.963 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.963 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:25.221 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.221 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:25.221 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.221 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:25.479 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.479 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:25.479 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.479 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:25.479 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.479 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:25.479 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.479 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:25.736 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.736 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:25.736 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.736 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:25.995 16:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.995 16:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:25.995 16:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.995 16:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:26.253 16:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.253 16:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:26.253 16:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:26.511 16:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:26.769 16:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:27.703 16:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:27.703 16:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:27.703 16:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.703 16:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:27.961 16:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.961 16:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:27.961 16:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:27.961 16:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.961 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.961 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:27.961 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.961 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:28.218 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:28.218 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:28.218 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.218 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:28.475 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.475 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:28.475 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:28.475 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.733 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.733 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:28.733 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.733 16:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2030836 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2030836 ']' 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2030836 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2030836 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2030836' 00:24:28.990 killing process with pid 2030836 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2030836 00:24:28.990 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2030836 00:24:28.990 { 00:24:28.990 "results": [ 00:24:28.990 { 00:24:28.990 "job": "Nvme0n1", 
00:24:28.990 "core_mask": "0x4", 00:24:28.990 "workload": "verify", 00:24:28.990 "status": "terminated", 00:24:28.990 "verify_range": { 00:24:28.990 "start": 0, 00:24:28.990 "length": 16384 00:24:28.990 }, 00:24:28.990 "queue_depth": 128, 00:24:28.990 "io_size": 4096, 00:24:28.990 "runtime": 28.728768, 00:24:28.991 "iops": 10657.366163421975, 00:24:28.991 "mibps": 41.63033657586709, 00:24:28.991 "io_failed": 0, 00:24:28.991 "io_timeout": 0, 00:24:28.991 "avg_latency_us": 11991.320615904517, 00:24:28.991 "min_latency_us": 1209.2952380952381, 00:24:28.991 "max_latency_us": 3019898.88 00:24:28.991 } 00:24:28.991 ], 00:24:28.991 "core_count": 1 00:24:28.991 } 00:24:29.250 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2030836 00:24:29.250 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:29.250 [2024-11-20 16:25:29.916838] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:24:29.250 [2024-11-20 16:25:29.916893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2030836 ] 00:24:29.250 [2024-11-20 16:25:29.988689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.250 [2024-11-20 16:25:30.036044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.250 Running I/O for 90 seconds... 00:24:29.250 11612.00 IOPS, 45.36 MiB/s [2024-11-20T15:26:00.484Z] 11616.00 IOPS, 45.38 MiB/s [2024-11-20T15:26:00.484Z] 11650.33 IOPS, 45.51 MiB/s [2024-11-20T15:26:00.484Z] 11573.50 IOPS, 45.21 MiB/s [2024-11-20T15:26:00.484Z] 11571.40 IOPS, 45.20 MiB/s [2024-11-20T15:26:00.484Z] 11555.33 IOPS, 45.14 MiB/s [2024-11-20T15:26:00.484Z] 11548.00 IOPS, 45.11 MiB/s [2024-11-20T15:26:00.484Z] 11528.88 IOPS, 45.03 MiB/s [2024-11-20T15:26:00.484Z] 11506.33 IOPS, 44.95 MiB/s [2024-11-20T15:26:00.484Z] 11506.00 IOPS, 44.95 MiB/s [2024-11-20T15:26:00.484Z] 11512.55 IOPS, 44.97 MiB/s [2024-11-20T15:26:00.484Z] 11519.42 IOPS, 45.00 MiB/s [2024-11-20T15:26:00.484Z] [2024-11-20 16:25:44.025801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.250 [2024-11-20 16:25:44.025840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:29.250 [2024-11-20 16:25:44.025890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.250 [2024-11-20 16:25:44.025899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:29.250 [2024-11-20 16:25:44.025912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.250 [2024-11-20 16:25:44.025920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:29.250 [2024-11-20 16:25:44.025932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:29.250 [2024-11-20 16:25:44.025939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:29.250 [2024-11-20 16:25:44.025951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.250 [2024-11-20 16:25:44.025958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:29.250 [2024-11-20 16:25:44.025970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.250 [2024-11-20 16:25:44.025977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:29.250 [2024-11-20 16:25:44.025989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.250 [2024-11-20 16:25:44.025996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:29.250 [2024-11-20 16:25:44.026008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.250 [2024-11-20 16:25:44.026015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:29.250 [2024-11-20 16:25:44.026027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.250 [2024-11-20 16:25:44.026033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026330] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 
sqhd:0005 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.251 [2024-11-20 16:25:44.026660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026696] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-11-20 16:25:44.026715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:29.251 [2024-11-20 16:25:44.026727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-11-20 16:25:44.026733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:29.252 [2024-11-20 16:25:44.026745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-11-20 16:25:44.026751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:29.252 [2024-11-20 16:25:44.026765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-11-20 16:25:44.026771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:29.252 [2024-11-20 16:25:44.026785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-11-20 16:25:44.026792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:29.252 [2024-11-20 16:25:44.027020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-11-20 16:25:44.027034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:29.252 [2024-11-20 16:25:44.027050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.252 [2024-11-20 16:25:44.027057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:29.252 [2024-11-20 16:25:44.027072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.252 [2024-11-20 16:25:44.027079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:29.252 [2024-11-20 16:25:44.027094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.252 [2024-11-20 16:25:44.027101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:29.252 [2024-11-20 16:25:44.027115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.252 [2024-11-20 
16:25:44.027121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:29.252 [2024-11-20 16:25:44] nvme_qpair.c: repeated *NOTICE* command/completion pairs on qid:1: WRITE (lba 122696-123232) and READ (lba 122400-122448) commands, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0
00:24:29.254 11257.62 IOPS, 43.98 MiB/s [2024-11-20T15:26:00.488Z] 10453.50 IOPS, 40.83 MiB/s [2024-11-20T15:26:00.488Z] 9756.60 IOPS, 38.11 MiB/s [2024-11-20T15:26:00.488Z] 9359.38 IOPS, 36.56 MiB/s [2024-11-20T15:26:00.488Z] 9484.82 IOPS, 37.05 MiB/s [2024-11-20T15:26:00.488Z] 9589.11 IOPS, 37.46 MiB/s [2024-11-20T15:26:00.488Z] 9776.37 IOPS, 38.19 MiB/s [2024-11-20T15:26:00.488Z] 9950.65 IOPS, 38.87 MiB/s [2024-11-20T15:26:00.488Z] 10111.81 IOPS, 39.50 MiB/s [2024-11-20T15:26:00.488Z] 10174.09 IOPS, 39.74 MiB/s [2024-11-20T15:26:00.488Z] 10226.26 IOPS, 39.95 MiB/s [2024-11-20T15:26:00.488Z] 10303.21 IOPS, 40.25 MiB/s [2024-11-20T15:26:00.488Z] 10437.12 IOPS, 40.77 MiB/s [2024-11-20T15:26:00.488Z] 10545.23 IOPS, 41.19 MiB/s [2024-11-20T15:26:00.488Z]
00:24:29.254 [2024-11-20 16:25:57] nvme_qpair.c: repeated *NOTICE* command/completion pairs on qid:1: READ (lba 3448-4320) and WRITE (lba 4336-4424) commands, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0
00:24:29.256 10608.41 IOPS, 41.44 MiB/s [2024-11-20T15:26:00.490Z] 10639.57 IOPS, 41.56 MiB/s [2024-11-20T15:26:00.490Z]
00:24:29.256 Received shutdown signal, test time was about 28.729419 seconds
00:24:29.256
00:24:29.256 Latency(us)
00:24:29.256 [2024-11-20T15:26:00.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:29.256 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:29.256 Verification LBA range: start 0x0 length 0x4000
00:24:29.256 Nvme0n1 : 28.73 10657.37 41.63 0.00 0.00 11991.32 1209.30 3019898.88
00:24:29.256 [2024-11-20T15:26:00.490Z] ===================================================================================================================
00:24:29.256 [2024-11-20T15:26:00.490Z] Total : 10657.37 41.63 0.00 0.00 11991.32 1209.30 3019898.88
00:24:29.256 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:29.256 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:29.256 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:29.256 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:29.256 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:29.256 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:24:29.256 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:29.256 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:24:29.256 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:29.256 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
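Note on the bdevperf summary above: with a fixed 4096-byte IO size, the MiB/s column is just the IOPS figure scaled by the IO size, so the table can be sanity-checked with nothing more than awk (the numbers below are copied from the Nvme0n1 row; this is not part of the test output):

    # 10657.37 IOPS at 4 KiB per IO, converted to MiB/s (1 MiB = 1048576 bytes)
    awk 'BEGIN { printf "%.2f MiB/s\n", 10657.37 * 4096 / (1024 * 1024) }'
    # prints 41.63, matching the MiB/s column reported for Nvme0n1 and for Total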
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2030361 ']' 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2030361 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2030361 ']' 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2030361 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2030361 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2030361' 00:24:29.514 killing process with pid 2030361 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2030361 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2030361 00:24:29.514 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.773 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.773 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.773 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:29.773 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:29.774 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.774 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.774 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.774 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.774 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.774 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.774 16:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.679 16:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.679 00:24:31.679 real 0m41.233s 00:24:31.679 user 1m51.386s 00:24:31.679 sys 0m11.565s 00:24:31.679 16:26:02 
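The xtrace above is nvmf/common.sh tearing the target down: killprocess checks and kills pid 2030361, and the iptables rules tagged SPDK_NVMF are stripped before the network-namespace cleanup. A condensed sketch of that pattern, not the exact SPDK helper (the pid is the one from this run; the function body is illustrative):

    # Stop a target process by pid, then drop only the firewall rules the test added.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap it when it was started by this shell
    }
    killprocess 2030361

    # keep every iptables rule except the SPDK_NVMF-tagged ones the test installed
    iptables-save | grep -v SPDK_NVMF | iptables-restore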
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.679 16:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:31.679 ************************************ 00:24:31.679 END TEST nvmf_host_multipath_status 00:24:31.679 ************************************ 00:24:31.679 16:26:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:31.679 16:26:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.679 16:26:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.679 16:26:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.679 ************************************ 00:24:31.679 START TEST nvmf_discovery_remove_ifc 00:24:31.679 ************************************ 00:24:31.679 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:31.967 * Looking for test storage... 00:24:31.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.967 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.967 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.967 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.967 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.967 --rc genhtml_branch_coverage=1 00:24:31.967 --rc genhtml_function_coverage=1 00:24:31.968 --rc genhtml_legend=1 00:24:31.968 --rc geninfo_all_blocks=1 00:24:31.968 --rc geninfo_unexecuted_blocks=1 00:24:31.968 00:24:31.968 ' 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.968 --rc genhtml_branch_coverage=1 00:24:31.968 --rc genhtml_function_coverage=1 00:24:31.968 --rc genhtml_legend=1 00:24:31.968 --rc geninfo_all_blocks=1 00:24:31.968 --rc geninfo_unexecuted_blocks=1 00:24:31.968 00:24:31.968 ' 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.968 --rc genhtml_branch_coverage=1 00:24:31.968 --rc genhtml_function_coverage=1 00:24:31.968 --rc genhtml_legend=1 00:24:31.968 --rc geninfo_all_blocks=1 00:24:31.968 --rc geninfo_unexecuted_blocks=1 00:24:31.968 00:24:31.968 ' 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.968 --rc genhtml_branch_coverage=1 00:24:31.968 --rc genhtml_function_coverage=1 00:24:31.968 --rc genhtml_legend=1 00:24:31.968 --rc geninfo_all_blocks=1 00:24:31.968 --rc geninfo_unexecuted_blocks=1 00:24:31.968 00:24:31.968 ' 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.968 
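The trace above is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2 before exporting the LCOV_OPTS flags. A stand-alone sketch of that kind of dotted-version comparison, simplified from what the trace walks through (numeric fields only; the helper name is made up):

    # version_lt A B: succeed when version A sorts strictly before version B
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1                                    # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: use the lcov_branch_coverage/lcov_function_coverage option names"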
16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.968 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:38.542 16:26:08 
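For context on the settings captured earlier in this stretch (discovery_port=8009 and the well-known discovery NQN nqn.2014-08.org.nvmexpress.discovery): 8009 is the conventional NVMe/TCP discovery service port, and once a target is listening a host can query it with nvme-cli roughly as below (the address is a placeholder, not a value from this job):

    # Ask an NVMe-oF discovery service on TCP port 8009 which subsystems it exposes.
    # 10.0.0.1 stands in for the target's address; replace it with a real traddr.
    nvme discover -t tcp -a 10.0.0.1 -s 8009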
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.542 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:38.543 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.543 16:26:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:38.543 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:38.543 Found net devices under 0000:86:00.0: cvl_0_0 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:38.543 Found net devices under 0000:86:00.1: cvl_0_1 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.543 
16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:24:38.543 00:24:38.543 --- 10.0.0.2 ping statistics --- 00:24:38.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.543 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:24:38.543 00:24:38.543 --- 10.0.0.1 ping statistics --- 00:24:38.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.543 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.543 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2039383 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2039383 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2039383 ']' 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:38.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.543 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.544 [2024-11-20 16:26:09.096949] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:24:38.544 [2024-11-20 16:26:09.096998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.544 [2024-11-20 16:26:09.175705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.544 [2024-11-20 16:26:09.216504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.544 [2024-11-20 16:26:09.216538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.544 [2024-11-20 16:26:09.216545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.544 [2024-11-20 16:26:09.216552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.544 [2024-11-20 16:26:09.216557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.544 [2024-11-20 16:26:09.217087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.544 [2024-11-20 16:26:09.360789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.544 [2024-11-20 16:26:09.368968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:38.544 null0 00:24:38.544 [2024-11-20 16:26:09.400959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2039427 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2039427 /tmp/host.sock 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2039427 ']' 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:38.544 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.544 [2024-11-20 16:26:09.467880] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:24:38.544 [2024-11-20 16:26:09.467920] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039427 ] 00:24:38.544 [2024-11-20 16:26:09.539310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.544 [2024-11-20 16:26:09.581834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.544 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.918 [2024-11-20 16:26:10.768352] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:39.918 [2024-11-20 16:26:10.768376] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:39.918 [2024-11-20 16:26:10.768392] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:39.918 [2024-11-20 16:26:10.855645] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:39.918 [2024-11-20 16:26:11.079746] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:39.918 [2024-11-20 16:26:11.080412] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1fb0a10:1 started. 00:24:39.918 [2024-11-20 16:26:11.081754] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:39.918 [2024-11-20 16:26:11.081793] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:39.918 [2024-11-20 16:26:11.081812] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:39.918 [2024-11-20 16:26:11.081823] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:39.918 [2024-11-20 16:26:11.081841] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.918 [2024-11-20 16:26:11.086520] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fb0a10 was disconnected and freed. delete nvme_qpair. 
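At this point the discovery controller on 10.0.0.2:8009 and nvme0 are attached; the wait_for_bdev nvme0n1 loop that follows confirms the namespace bdev is visible. Stripped of xtrace noise, the host-side bring-up traced above amounts to the following sketch (rpc_cmd is the test framework's RPC helper seen in the trace, the binary path is shortened, and the wait loop is reconstructed from the repeated bdev_get_bdevs calls):

  # second SPDK app acts as the NVMe-oF host, controlled over /tmp/host.sock
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
  rpc_cmd -s /tmp/host.sock framework_start_init
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach
  # wait_for_bdev nvme0n1: poll the bdev list once per second until it matches
  while [[ "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme0n1 ]]; do
      sleep 1
  done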
00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:39.918 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:40.177 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:40.177 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.177 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.177 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.177 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.177 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.177 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.177 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.177 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.177 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:40.177 16:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.110 16:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.110 16:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.110 16:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.110 16:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.110 16:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.110 16:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.110 16:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.111 16:26:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.368 16:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:41.368 16:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:42.302 16:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:42.302 16:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.302 16:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:42.302 16:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:42.302 16:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.302 16:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:42.302 16:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.302 16:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.302 16:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:42.302 16:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.236 16:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.236 16:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.236 16:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.236 16:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.236 16:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.236 16:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.236 16:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.236 16:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.236 16:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:43.236 16:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.611 16:26:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.611 16:26:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.611 16:26:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.611 16:26:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.611 16:26:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.611 16:26:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.611 16:26:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.611 16:26:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.611 16:26:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:44.611 16:26:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:45.546 16:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.546 16:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.546 16:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.546 16:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.546 16:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.546 16:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.546 16:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.546 [2024-11-20 16:26:16.523417] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:45.546 [2024-11-20 16:26:16.523451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.546 [2024-11-20 16:26:16.523461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.546 [2024-11-20 16:26:16.523470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.546 [2024-11-20 16:26:16.523477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.546 [2024-11-20 16:26:16.523484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.546 [2024-11-20 16:26:16.523491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.546 [2024-11-20 16:26:16.523498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.546 [2024-11-20 16:26:16.523505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.546 [2024-11-20 16:26:16.523512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.546 [2024-11-20 16:26:16.523519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.546 [2024-11-20 16:26:16.523525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d220 is same with the state(6) to be set 00:24:45.546 16:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.546 [2024-11-20 
16:26:16.533439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8d220 (9): Bad file descriptor 00:24:45.546 [2024-11-20 16:26:16.543474] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:45.546 [2024-11-20 16:26:16.543486] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:45.546 [2024-11-20 16:26:16.543490] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:45.546 [2024-11-20 16:26:16.543494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:45.546 [2024-11-20 16:26:16.543513] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:45.546 16:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:45.546 16:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:46.483 16:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:46.483 16:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.483 16:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:46.483 16:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.483 16:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:46.483 16:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.483 16:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:46.483 [2024-11-20 16:26:17.607256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:46.483 [2024-11-20 16:26:17.607335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f8d220 with addr=10.0.0.2, port=4420 00:24:46.483 [2024-11-20 16:26:17.607366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d220 is same with the state(6) to be set 00:24:46.483 [2024-11-20 16:26:17.607418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8d220 (9): Bad file descriptor 00:24:46.483 [2024-11-20 16:26:17.608361] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:24:46.483 [2024-11-20 16:26:17.608424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:46.483 [2024-11-20 16:26:17.608447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:46.483 [2024-11-20 16:26:17.608470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:46.483 [2024-11-20 16:26:17.608489] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:46.483 [2024-11-20 16:26:17.608506] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:24:46.483 [2024-11-20 16:26:17.608519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:46.483 [2024-11-20 16:26:17.608542] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:46.483 [2024-11-20 16:26:17.608556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:46.483 16:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.483 16:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:46.483 16:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:47.418 [2024-11-20 16:26:18.611069] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:47.418 [2024-11-20 16:26:18.611089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:47.418 [2024-11-20 16:26:18.611100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:47.418 [2024-11-20 16:26:18.611106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:47.418 [2024-11-20 16:26:18.611112] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:47.418 [2024-11-20 16:26:18.611119] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:47.418 [2024-11-20 16:26:18.611123] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:47.418 [2024-11-20 16:26:18.611127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:47.418 [2024-11-20 16:26:18.611145] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:47.418 [2024-11-20 16:26:18.611167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.418 [2024-11-20 16:26:18.611175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.418 [2024-11-20 16:26:18.611184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.418 [2024-11-20 16:26:18.611191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.418 [2024-11-20 16:26:18.611198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.418 [2024-11-20 16:26:18.611208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.418 [2024-11-20 16:26:18.611215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.418 [2024-11-20 16:26:18.611221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.418 [2024-11-20 16:26:18.611228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.418 [2024-11-20 16:26:18.611234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.418 [2024-11-20 16:26:18.611240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
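These failures are the point of the test: at 16:26:11 the target-side address was deleted and the interface taken down underneath a live connection, so recv() and the reconnect attempts time out with errno 110 until the --ctrlr-loss-timeout-sec 2 budget is exhausted and the controller is failed, taking nvme0n1 with it. Condensed from the trace, the removal step is just:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # wait_for_bdev '': same polling loop as above, now waiting for bdev_get_bdevs
  # to return an empty list once the reconnect attempts give up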
00:24:47.418 [2024-11-20 16:26:18.611662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7c900 (9): Bad file descriptor 00:24:47.418 [2024-11-20 16:26:18.612673] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:47.418 [2024-11-20 16:26:18.612683] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:47.418 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.418 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.418 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.418 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.418 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.418 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.418 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.418 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:47.678 16:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:48.611 16:26:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:48.611 16:26:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.612 16:26:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:48.612 16:26:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.612 16:26:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:48.612 16:26:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.612 16:26:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:48.612 16:26:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.870 16:26:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:48.870 16:26:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.436 [2024-11-20 16:26:20.622275] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:49.436 [2024-11-20 16:26:20.622297] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:49.436 [2024-11-20 16:26:20.622308] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:49.694 [2024-11-20 16:26:20.708583] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:49.694 [2024-11-20 16:26:20.844452] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:49.694 [2024-11-20 16:26:20.845097] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1f81830:1 started. 00:24:49.694 [2024-11-20 16:26:20.846159] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:49.694 [2024-11-20 16:26:20.846189] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:49.694 [2024-11-20 16:26:20.846211] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:49.694 [2024-11-20 16:26:20.846224] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:49.694 [2024-11-20 16:26:20.846232] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:49.694 [2024-11-20 16:26:20.851546] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1f81830 was disconnected and freed. delete nvme_qpair. 
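With the address re-added and the link brought back up at 16:26:18, discovery finds the subsystem again, attaches a fresh controller (nvme1, qpair 0x1f81830), and the namespace reappears as nvme1n1 rather than nvme0n1. The restore step, condensed from the trace:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # wait_for_bdev nvme1n1: poll until the re-attached controller's namespace shows up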
00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2039427 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2039427 ']' 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2039427 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.694 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2039427 00:24:49.953 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:49.953 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:49.953 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2039427' 00:24:49.953 killing process with pid 2039427 00:24:49.953 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2039427 00:24:49.953 16:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2039427 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:49.953 rmmod nvme_tcp 00:24:49.953 rmmod nvme_fabrics 00:24:49.953 rmmod nvme_keyring 00:24:49.953 16:26:21 
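From here the test unwinds its setup: nvmftestfini kills both nvmf_tgt processes (hostpid 2039427 above, nvmfpid 2039383 below), unloads the kernel NVMe/TCP modules, strips the SPDK_NVMF iptables rule, and removes the namespace. A rough sketch of what the surrounding trace performs; the iptables pipe order and the namespace deletion inside _remove_spdk_ns are inferred, since xtrace is suppressed for those helpers:

  killprocess "$hostpid" && killprocess "$nvmfpid"        # 2039427 and 2039383 in this run
  modprobe -v -r nvme-tcp                                 # also drops nvme-fabrics / nvme-keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop the rule added at setup
  ip netns delete cvl_0_0_ns_spdk                         # assumed: done by _remove_spdk_ns
  ip -4 addr flush cvl_0_1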
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2039383 ']' 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2039383 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2039383 ']' 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2039383 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:49.953 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2039383 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2039383' 00:24:50.212 killing process with pid 2039383 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2039383 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2039383 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.212 16:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:52.749 00:24:52.749 real 0m20.572s 00:24:52.749 user 0m24.838s 00:24:52.749 sys 0m5.875s 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.749 ************************************ 00:24:52.749 END TEST nvmf_discovery_remove_ifc 00:24:52.749 ************************************ 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.749 ************************************ 00:24:52.749 START TEST nvmf_identify_kernel_target 00:24:52.749 ************************************ 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:52.749 * Looking for test storage... 00:24:52.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:52.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.749 --rc genhtml_branch_coverage=1 00:24:52.749 --rc genhtml_function_coverage=1 00:24:52.749 --rc genhtml_legend=1 00:24:52.749 --rc geninfo_all_blocks=1 00:24:52.749 --rc geninfo_unexecuted_blocks=1 00:24:52.749 00:24:52.749 ' 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:52.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.749 --rc genhtml_branch_coverage=1 00:24:52.749 --rc genhtml_function_coverage=1 00:24:52.749 --rc genhtml_legend=1 00:24:52.749 --rc geninfo_all_blocks=1 00:24:52.749 --rc geninfo_unexecuted_blocks=1 00:24:52.749 00:24:52.749 ' 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:52.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.749 --rc genhtml_branch_coverage=1 00:24:52.749 --rc genhtml_function_coverage=1 00:24:52.749 --rc genhtml_legend=1 00:24:52.749 --rc geninfo_all_blocks=1 00:24:52.749 --rc geninfo_unexecuted_blocks=1 00:24:52.749 00:24:52.749 ' 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:52.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.749 --rc genhtml_branch_coverage=1 00:24:52.749 --rc genhtml_function_coverage=1 00:24:52.749 --rc genhtml_legend=1 00:24:52.749 --rc geninfo_all_blocks=1 00:24:52.749 --rc geninfo_unexecuted_blocks=1 00:24:52.749 00:24:52.749 ' 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.749 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:52.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:52.750 16:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.320 16:26:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:59.320 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:59.320 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:59.320 Found net devices under 0000:86:00.0: cvl_0_0 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:59.320 Found net devices under 0000:86:00.1: cvl_0_1 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:59.320 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:24:59.321 00:24:59.321 --- 10.0.0.2 ping statistics --- 00:24:59.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.321 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:59.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:24:59.321 00:24:59.321 --- 10.0.0.1 ping statistics --- 00:24:59.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.321 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.321 16:26:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:59.321 16:26:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:01.223 Waiting for block devices as requested 00:25:01.481 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:01.481 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:01.481 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:01.738 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:01.738 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:01.738 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:01.995 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:01.995 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:01.995 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:01.995 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:02.253 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:02.253 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:02.253 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:02.512 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:02.512 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:02.512 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:02.771 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
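At this point the script is walking /sys/block/nvme* for a namespace that is neither zoned nor already claimed; once one is found (here /dev/nvme0n1), configure_kernel_target exports it over the in-kernel nvmet/tcp target through the configfs paths defined above. The xtrace that continues below does not show where each echo is redirected, so the attribute files in this sketch are assumptions based on the stock nvmet configfs layout rather than a verbatim copy of nvmf/common.sh:

    # load the kernel target core and its TCP transport
    modprobe nvmet
    modprobe nvmet-tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    # create the subsystem and let any host NQN connect
    mkdir "$subsys"
    echo 1 > "$subsys/attr_allow_any_host"
    # back namespace 1 with the selected block device and enable it
    mkdir "$subsys/namespaces/1"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    # listen on the 10.0.0.1:4420 address used in this run
    mkdir "$port"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    # publishing the subsystem on the port is a symlink
    ln -s "$subsys" "$port/subsystems/"

With that tree in place, the discovery log below reports two records (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn), which is what the test goes on to verify.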
00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:02.771 No valid GPT data, bailing 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:02.771 16:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:03.032 00:25:03.032 Discovery Log Number of Records 2, Generation counter 2 00:25:03.032 =====Discovery Log Entry 0====== 00:25:03.032 trtype: tcp 00:25:03.032 adrfam: ipv4 00:25:03.032 subtype: current discovery subsystem 00:25:03.032 treq: not specified, sq flow control disable supported 00:25:03.032 portid: 1 00:25:03.032 trsvcid: 4420 00:25:03.032 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:03.032 traddr: 10.0.0.1 00:25:03.032 eflags: none 00:25:03.032 sectype: none 00:25:03.032 =====Discovery Log Entry 1====== 00:25:03.032 trtype: tcp 00:25:03.032 adrfam: ipv4 00:25:03.032 subtype: nvme subsystem 00:25:03.032 treq: not specified, sq flow control disable 
supported 00:25:03.032 portid: 1 00:25:03.032 trsvcid: 4420 00:25:03.032 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:03.032 traddr: 10.0.0.1 00:25:03.032 eflags: none 00:25:03.032 sectype: none 00:25:03.032 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:03.032 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:03.032 ===================================================== 00:25:03.032 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:03.032 ===================================================== 00:25:03.032 Controller Capabilities/Features 00:25:03.032 ================================ 00:25:03.032 Vendor ID: 0000 00:25:03.032 Subsystem Vendor ID: 0000 00:25:03.032 Serial Number: 7f0a24f28aa91d9bf1aa 00:25:03.032 Model Number: Linux 00:25:03.032 Firmware Version: 6.8.9-20 00:25:03.032 Recommended Arb Burst: 0 00:25:03.032 IEEE OUI Identifier: 00 00 00 00:25:03.032 Multi-path I/O 00:25:03.032 May have multiple subsystem ports: No 00:25:03.032 May have multiple controllers: No 00:25:03.032 Associated with SR-IOV VF: No 00:25:03.032 Max Data Transfer Size: Unlimited 00:25:03.032 Max Number of Namespaces: 0 00:25:03.032 Max Number of I/O Queues: 1024 00:25:03.032 NVMe Specification Version (VS): 1.3 00:25:03.032 NVMe Specification Version (Identify): 1.3 00:25:03.032 Maximum Queue Entries: 1024 00:25:03.032 Contiguous Queues Required: No 00:25:03.032 Arbitration Mechanisms Supported 00:25:03.032 Weighted Round Robin: Not Supported 00:25:03.032 Vendor Specific: Not Supported 00:25:03.032 Reset Timeout: 7500 ms 00:25:03.032 Doorbell Stride: 4 bytes 00:25:03.032 NVM Subsystem Reset: Not Supported 00:25:03.032 Command Sets Supported 00:25:03.032 NVM Command Set: Supported 00:25:03.032 Boot Partition: Not Supported 00:25:03.032 Memory Page Size Minimum: 4096 bytes 00:25:03.032 Memory Page Size Maximum: 4096 bytes 00:25:03.032 Persistent Memory Region: Not Supported 00:25:03.032 Optional Asynchronous Events Supported 00:25:03.032 Namespace Attribute Notices: Not Supported 00:25:03.032 Firmware Activation Notices: Not Supported 00:25:03.032 ANA Change Notices: Not Supported 00:25:03.032 PLE Aggregate Log Change Notices: Not Supported 00:25:03.032 LBA Status Info Alert Notices: Not Supported 00:25:03.032 EGE Aggregate Log Change Notices: Not Supported 00:25:03.032 Normal NVM Subsystem Shutdown event: Not Supported 00:25:03.032 Zone Descriptor Change Notices: Not Supported 00:25:03.032 Discovery Log Change Notices: Supported 00:25:03.032 Controller Attributes 00:25:03.032 128-bit Host Identifier: Not Supported 00:25:03.032 Non-Operational Permissive Mode: Not Supported 00:25:03.032 NVM Sets: Not Supported 00:25:03.032 Read Recovery Levels: Not Supported 00:25:03.032 Endurance Groups: Not Supported 00:25:03.032 Predictable Latency Mode: Not Supported 00:25:03.032 Traffic Based Keep ALive: Not Supported 00:25:03.032 Namespace Granularity: Not Supported 00:25:03.032 SQ Associations: Not Supported 00:25:03.032 UUID List: Not Supported 00:25:03.032 Multi-Domain Subsystem: Not Supported 00:25:03.032 Fixed Capacity Management: Not Supported 00:25:03.032 Variable Capacity Management: Not Supported 00:25:03.032 Delete Endurance Group: Not Supported 00:25:03.032 Delete NVM Set: Not Supported 00:25:03.032 Extended LBA Formats Supported: Not Supported 00:25:03.032 Flexible Data Placement 
Supported: Not Supported 00:25:03.032 00:25:03.032 Controller Memory Buffer Support 00:25:03.032 ================================ 00:25:03.032 Supported: No 00:25:03.032 00:25:03.032 Persistent Memory Region Support 00:25:03.032 ================================ 00:25:03.032 Supported: No 00:25:03.032 00:25:03.032 Admin Command Set Attributes 00:25:03.032 ============================ 00:25:03.032 Security Send/Receive: Not Supported 00:25:03.032 Format NVM: Not Supported 00:25:03.032 Firmware Activate/Download: Not Supported 00:25:03.032 Namespace Management: Not Supported 00:25:03.032 Device Self-Test: Not Supported 00:25:03.032 Directives: Not Supported 00:25:03.032 NVMe-MI: Not Supported 00:25:03.032 Virtualization Management: Not Supported 00:25:03.032 Doorbell Buffer Config: Not Supported 00:25:03.032 Get LBA Status Capability: Not Supported 00:25:03.032 Command & Feature Lockdown Capability: Not Supported 00:25:03.032 Abort Command Limit: 1 00:25:03.032 Async Event Request Limit: 1 00:25:03.032 Number of Firmware Slots: N/A 00:25:03.032 Firmware Slot 1 Read-Only: N/A 00:25:03.032 Firmware Activation Without Reset: N/A 00:25:03.032 Multiple Update Detection Support: N/A 00:25:03.032 Firmware Update Granularity: No Information Provided 00:25:03.032 Per-Namespace SMART Log: No 00:25:03.032 Asymmetric Namespace Access Log Page: Not Supported 00:25:03.032 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:03.032 Command Effects Log Page: Not Supported 00:25:03.032 Get Log Page Extended Data: Supported 00:25:03.032 Telemetry Log Pages: Not Supported 00:25:03.032 Persistent Event Log Pages: Not Supported 00:25:03.032 Supported Log Pages Log Page: May Support 00:25:03.032 Commands Supported & Effects Log Page: Not Supported 00:25:03.032 Feature Identifiers & Effects Log Page:May Support 00:25:03.032 NVMe-MI Commands & Effects Log Page: May Support 00:25:03.032 Data Area 4 for Telemetry Log: Not Supported 00:25:03.032 Error Log Page Entries Supported: 1 00:25:03.032 Keep Alive: Not Supported 00:25:03.032 00:25:03.032 NVM Command Set Attributes 00:25:03.032 ========================== 00:25:03.032 Submission Queue Entry Size 00:25:03.032 Max: 1 00:25:03.032 Min: 1 00:25:03.032 Completion Queue Entry Size 00:25:03.032 Max: 1 00:25:03.032 Min: 1 00:25:03.032 Number of Namespaces: 0 00:25:03.032 Compare Command: Not Supported 00:25:03.032 Write Uncorrectable Command: Not Supported 00:25:03.032 Dataset Management Command: Not Supported 00:25:03.032 Write Zeroes Command: Not Supported 00:25:03.032 Set Features Save Field: Not Supported 00:25:03.032 Reservations: Not Supported 00:25:03.032 Timestamp: Not Supported 00:25:03.032 Copy: Not Supported 00:25:03.032 Volatile Write Cache: Not Present 00:25:03.032 Atomic Write Unit (Normal): 1 00:25:03.032 Atomic Write Unit (PFail): 1 00:25:03.033 Atomic Compare & Write Unit: 1 00:25:03.033 Fused Compare & Write: Not Supported 00:25:03.033 Scatter-Gather List 00:25:03.033 SGL Command Set: Supported 00:25:03.033 SGL Keyed: Not Supported 00:25:03.033 SGL Bit Bucket Descriptor: Not Supported 00:25:03.033 SGL Metadata Pointer: Not Supported 00:25:03.033 Oversized SGL: Not Supported 00:25:03.033 SGL Metadata Address: Not Supported 00:25:03.033 SGL Offset: Supported 00:25:03.033 Transport SGL Data Block: Not Supported 00:25:03.033 Replay Protected Memory Block: Not Supported 00:25:03.033 00:25:03.033 Firmware Slot Information 00:25:03.033 ========================= 00:25:03.033 Active slot: 0 00:25:03.033 00:25:03.033 00:25:03.033 Error Log 00:25:03.033 
========= 00:25:03.033 00:25:03.033 Active Namespaces 00:25:03.033 ================= 00:25:03.033 Discovery Log Page 00:25:03.033 ================== 00:25:03.033 Generation Counter: 2 00:25:03.033 Number of Records: 2 00:25:03.033 Record Format: 0 00:25:03.033 00:25:03.033 Discovery Log Entry 0 00:25:03.033 ---------------------- 00:25:03.033 Transport Type: 3 (TCP) 00:25:03.033 Address Family: 1 (IPv4) 00:25:03.033 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:03.033 Entry Flags: 00:25:03.033 Duplicate Returned Information: 0 00:25:03.033 Explicit Persistent Connection Support for Discovery: 0 00:25:03.033 Transport Requirements: 00:25:03.033 Secure Channel: Not Specified 00:25:03.033 Port ID: 1 (0x0001) 00:25:03.033 Controller ID: 65535 (0xffff) 00:25:03.033 Admin Max SQ Size: 32 00:25:03.033 Transport Service Identifier: 4420 00:25:03.033 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:03.033 Transport Address: 10.0.0.1 00:25:03.033 Discovery Log Entry 1 00:25:03.033 ---------------------- 00:25:03.033 Transport Type: 3 (TCP) 00:25:03.033 Address Family: 1 (IPv4) 00:25:03.033 Subsystem Type: 2 (NVM Subsystem) 00:25:03.033 Entry Flags: 00:25:03.033 Duplicate Returned Information: 0 00:25:03.033 Explicit Persistent Connection Support for Discovery: 0 00:25:03.033 Transport Requirements: 00:25:03.033 Secure Channel: Not Specified 00:25:03.033 Port ID: 1 (0x0001) 00:25:03.033 Controller ID: 65535 (0xffff) 00:25:03.033 Admin Max SQ Size: 32 00:25:03.033 Transport Service Identifier: 4420 00:25:03.033 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:03.033 Transport Address: 10.0.0.1 00:25:03.033 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:03.033 get_feature(0x01) failed 00:25:03.033 get_feature(0x02) failed 00:25:03.033 get_feature(0x04) failed 00:25:03.033 ===================================================== 00:25:03.033 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:03.033 ===================================================== 00:25:03.033 Controller Capabilities/Features 00:25:03.033 ================================ 00:25:03.033 Vendor ID: 0000 00:25:03.033 Subsystem Vendor ID: 0000 00:25:03.033 Serial Number: dc4e33ae3a347e4b0167 00:25:03.033 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:03.033 Firmware Version: 6.8.9-20 00:25:03.033 Recommended Arb Burst: 6 00:25:03.033 IEEE OUI Identifier: 00 00 00 00:25:03.033 Multi-path I/O 00:25:03.033 May have multiple subsystem ports: Yes 00:25:03.033 May have multiple controllers: Yes 00:25:03.033 Associated with SR-IOV VF: No 00:25:03.033 Max Data Transfer Size: Unlimited 00:25:03.033 Max Number of Namespaces: 1024 00:25:03.033 Max Number of I/O Queues: 128 00:25:03.033 NVMe Specification Version (VS): 1.3 00:25:03.033 NVMe Specification Version (Identify): 1.3 00:25:03.033 Maximum Queue Entries: 1024 00:25:03.033 Contiguous Queues Required: No 00:25:03.033 Arbitration Mechanisms Supported 00:25:03.033 Weighted Round Robin: Not Supported 00:25:03.033 Vendor Specific: Not Supported 00:25:03.033 Reset Timeout: 7500 ms 00:25:03.033 Doorbell Stride: 4 bytes 00:25:03.033 NVM Subsystem Reset: Not Supported 00:25:03.033 Command Sets Supported 00:25:03.033 NVM Command Set: Supported 00:25:03.033 Boot Partition: Not Supported 00:25:03.033 
Memory Page Size Minimum: 4096 bytes 00:25:03.033 Memory Page Size Maximum: 4096 bytes 00:25:03.033 Persistent Memory Region: Not Supported 00:25:03.033 Optional Asynchronous Events Supported 00:25:03.033 Namespace Attribute Notices: Supported 00:25:03.033 Firmware Activation Notices: Not Supported 00:25:03.033 ANA Change Notices: Supported 00:25:03.033 PLE Aggregate Log Change Notices: Not Supported 00:25:03.033 LBA Status Info Alert Notices: Not Supported 00:25:03.033 EGE Aggregate Log Change Notices: Not Supported 00:25:03.033 Normal NVM Subsystem Shutdown event: Not Supported 00:25:03.033 Zone Descriptor Change Notices: Not Supported 00:25:03.033 Discovery Log Change Notices: Not Supported 00:25:03.033 Controller Attributes 00:25:03.033 128-bit Host Identifier: Supported 00:25:03.033 Non-Operational Permissive Mode: Not Supported 00:25:03.033 NVM Sets: Not Supported 00:25:03.033 Read Recovery Levels: Not Supported 00:25:03.033 Endurance Groups: Not Supported 00:25:03.033 Predictable Latency Mode: Not Supported 00:25:03.033 Traffic Based Keep ALive: Supported 00:25:03.033 Namespace Granularity: Not Supported 00:25:03.033 SQ Associations: Not Supported 00:25:03.033 UUID List: Not Supported 00:25:03.033 Multi-Domain Subsystem: Not Supported 00:25:03.033 Fixed Capacity Management: Not Supported 00:25:03.033 Variable Capacity Management: Not Supported 00:25:03.033 Delete Endurance Group: Not Supported 00:25:03.033 Delete NVM Set: Not Supported 00:25:03.033 Extended LBA Formats Supported: Not Supported 00:25:03.033 Flexible Data Placement Supported: Not Supported 00:25:03.033 00:25:03.033 Controller Memory Buffer Support 00:25:03.033 ================================ 00:25:03.033 Supported: No 00:25:03.033 00:25:03.033 Persistent Memory Region Support 00:25:03.033 ================================ 00:25:03.033 Supported: No 00:25:03.033 00:25:03.033 Admin Command Set Attributes 00:25:03.033 ============================ 00:25:03.033 Security Send/Receive: Not Supported 00:25:03.033 Format NVM: Not Supported 00:25:03.033 Firmware Activate/Download: Not Supported 00:25:03.033 Namespace Management: Not Supported 00:25:03.033 Device Self-Test: Not Supported 00:25:03.033 Directives: Not Supported 00:25:03.033 NVMe-MI: Not Supported 00:25:03.033 Virtualization Management: Not Supported 00:25:03.033 Doorbell Buffer Config: Not Supported 00:25:03.033 Get LBA Status Capability: Not Supported 00:25:03.033 Command & Feature Lockdown Capability: Not Supported 00:25:03.033 Abort Command Limit: 4 00:25:03.033 Async Event Request Limit: 4 00:25:03.033 Number of Firmware Slots: N/A 00:25:03.033 Firmware Slot 1 Read-Only: N/A 00:25:03.033 Firmware Activation Without Reset: N/A 00:25:03.033 Multiple Update Detection Support: N/A 00:25:03.033 Firmware Update Granularity: No Information Provided 00:25:03.033 Per-Namespace SMART Log: Yes 00:25:03.033 Asymmetric Namespace Access Log Page: Supported 00:25:03.033 ANA Transition Time : 10 sec 00:25:03.033 00:25:03.033 Asymmetric Namespace Access Capabilities 00:25:03.033 ANA Optimized State : Supported 00:25:03.033 ANA Non-Optimized State : Supported 00:25:03.033 ANA Inaccessible State : Supported 00:25:03.033 ANA Persistent Loss State : Supported 00:25:03.033 ANA Change State : Supported 00:25:03.033 ANAGRPID is not changed : No 00:25:03.033 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:03.033 00:25:03.033 ANA Group Identifier Maximum : 128 00:25:03.033 Number of ANA Group Identifiers : 128 00:25:03.033 Max Number of Allowed Namespaces : 1024 00:25:03.033 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:03.033 Command Effects Log Page: Supported 00:25:03.033 Get Log Page Extended Data: Supported 00:25:03.033 Telemetry Log Pages: Not Supported 00:25:03.033 Persistent Event Log Pages: Not Supported 00:25:03.033 Supported Log Pages Log Page: May Support 00:25:03.033 Commands Supported & Effects Log Page: Not Supported 00:25:03.033 Feature Identifiers & Effects Log Page:May Support 00:25:03.033 NVMe-MI Commands & Effects Log Page: May Support 00:25:03.033 Data Area 4 for Telemetry Log: Not Supported 00:25:03.033 Error Log Page Entries Supported: 128 00:25:03.033 Keep Alive: Supported 00:25:03.033 Keep Alive Granularity: 1000 ms 00:25:03.033 00:25:03.033 NVM Command Set Attributes 00:25:03.033 ========================== 00:25:03.033 Submission Queue Entry Size 00:25:03.033 Max: 64 00:25:03.033 Min: 64 00:25:03.033 Completion Queue Entry Size 00:25:03.033 Max: 16 00:25:03.033 Min: 16 00:25:03.033 Number of Namespaces: 1024 00:25:03.034 Compare Command: Not Supported 00:25:03.034 Write Uncorrectable Command: Not Supported 00:25:03.034 Dataset Management Command: Supported 00:25:03.034 Write Zeroes Command: Supported 00:25:03.034 Set Features Save Field: Not Supported 00:25:03.034 Reservations: Not Supported 00:25:03.034 Timestamp: Not Supported 00:25:03.034 Copy: Not Supported 00:25:03.034 Volatile Write Cache: Present 00:25:03.034 Atomic Write Unit (Normal): 1 00:25:03.034 Atomic Write Unit (PFail): 1 00:25:03.034 Atomic Compare & Write Unit: 1 00:25:03.034 Fused Compare & Write: Not Supported 00:25:03.034 Scatter-Gather List 00:25:03.034 SGL Command Set: Supported 00:25:03.034 SGL Keyed: Not Supported 00:25:03.034 SGL Bit Bucket Descriptor: Not Supported 00:25:03.034 SGL Metadata Pointer: Not Supported 00:25:03.034 Oversized SGL: Not Supported 00:25:03.034 SGL Metadata Address: Not Supported 00:25:03.034 SGL Offset: Supported 00:25:03.034 Transport SGL Data Block: Not Supported 00:25:03.034 Replay Protected Memory Block: Not Supported 00:25:03.034 00:25:03.034 Firmware Slot Information 00:25:03.034 ========================= 00:25:03.034 Active slot: 0 00:25:03.034 00:25:03.034 Asymmetric Namespace Access 00:25:03.034 =========================== 00:25:03.034 Change Count : 0 00:25:03.034 Number of ANA Group Descriptors : 1 00:25:03.034 ANA Group Descriptor : 0 00:25:03.034 ANA Group ID : 1 00:25:03.034 Number of NSID Values : 1 00:25:03.034 Change Count : 0 00:25:03.034 ANA State : 1 00:25:03.034 Namespace Identifier : 1 00:25:03.034 00:25:03.034 Commands Supported and Effects 00:25:03.034 ============================== 00:25:03.034 Admin Commands 00:25:03.034 -------------- 00:25:03.034 Get Log Page (02h): Supported 00:25:03.034 Identify (06h): Supported 00:25:03.034 Abort (08h): Supported 00:25:03.034 Set Features (09h): Supported 00:25:03.034 Get Features (0Ah): Supported 00:25:03.034 Asynchronous Event Request (0Ch): Supported 00:25:03.034 Keep Alive (18h): Supported 00:25:03.034 I/O Commands 00:25:03.034 ------------ 00:25:03.034 Flush (00h): Supported 00:25:03.034 Write (01h): Supported LBA-Change 00:25:03.034 Read (02h): Supported 00:25:03.034 Write Zeroes (08h): Supported LBA-Change 00:25:03.034 Dataset Management (09h): Supported 00:25:03.034 00:25:03.034 Error Log 00:25:03.034 ========= 00:25:03.034 Entry: 0 00:25:03.034 Error Count: 0x3 00:25:03.034 Submission Queue Id: 0x0 00:25:03.034 Command Id: 0x5 00:25:03.034 Phase Bit: 0 00:25:03.034 Status Code: 0x2 00:25:03.034 Status Code Type: 0x0 00:25:03.034 Do Not Retry: 1 00:25:03.034 
Error Location: 0x28 00:25:03.034 LBA: 0x0 00:25:03.034 Namespace: 0x0 00:25:03.034 Vendor Log Page: 0x0 00:25:03.034 ----------- 00:25:03.034 Entry: 1 00:25:03.034 Error Count: 0x2 00:25:03.034 Submission Queue Id: 0x0 00:25:03.034 Command Id: 0x5 00:25:03.034 Phase Bit: 0 00:25:03.034 Status Code: 0x2 00:25:03.034 Status Code Type: 0x0 00:25:03.034 Do Not Retry: 1 00:25:03.034 Error Location: 0x28 00:25:03.034 LBA: 0x0 00:25:03.034 Namespace: 0x0 00:25:03.034 Vendor Log Page: 0x0 00:25:03.034 ----------- 00:25:03.034 Entry: 2 00:25:03.034 Error Count: 0x1 00:25:03.034 Submission Queue Id: 0x0 00:25:03.034 Command Id: 0x4 00:25:03.034 Phase Bit: 0 00:25:03.034 Status Code: 0x2 00:25:03.034 Status Code Type: 0x0 00:25:03.034 Do Not Retry: 1 00:25:03.034 Error Location: 0x28 00:25:03.034 LBA: 0x0 00:25:03.034 Namespace: 0x0 00:25:03.034 Vendor Log Page: 0x0 00:25:03.034 00:25:03.034 Number of Queues 00:25:03.034 ================ 00:25:03.034 Number of I/O Submission Queues: 128 00:25:03.034 Number of I/O Completion Queues: 128 00:25:03.034 00:25:03.034 ZNS Specific Controller Data 00:25:03.034 ============================ 00:25:03.034 Zone Append Size Limit: 0 00:25:03.034 00:25:03.034 00:25:03.034 Active Namespaces 00:25:03.034 ================= 00:25:03.034 get_feature(0x05) failed 00:25:03.034 Namespace ID:1 00:25:03.034 Command Set Identifier: NVM (00h) 00:25:03.034 Deallocate: Supported 00:25:03.034 Deallocated/Unwritten Error: Not Supported 00:25:03.034 Deallocated Read Value: Unknown 00:25:03.034 Deallocate in Write Zeroes: Not Supported 00:25:03.034 Deallocated Guard Field: 0xFFFF 00:25:03.034 Flush: Supported 00:25:03.034 Reservation: Not Supported 00:25:03.034 Namespace Sharing Capabilities: Multiple Controllers 00:25:03.034 Size (in LBAs): 3125627568 (1490GiB) 00:25:03.034 Capacity (in LBAs): 3125627568 (1490GiB) 00:25:03.034 Utilization (in LBAs): 3125627568 (1490GiB) 00:25:03.034 UUID: 19275e11-a861-497e-ba8b-b39c6bc77c6a 00:25:03.034 Thin Provisioning: Not Supported 00:25:03.034 Per-NS Atomic Units: Yes 00:25:03.034 Atomic Boundary Size (Normal): 0 00:25:03.034 Atomic Boundary Size (PFail): 0 00:25:03.034 Atomic Boundary Offset: 0 00:25:03.034 NGUID/EUI64 Never Reused: No 00:25:03.034 ANA group ID: 1 00:25:03.034 Namespace Write Protected: No 00:25:03.034 Number of LBA Formats: 1 00:25:03.034 Current LBA Format: LBA Format #00 00:25:03.034 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:03.034 00:25:03.034 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:03.034 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:03.034 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:03.034 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.034 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:03.034 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.034 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.034 rmmod nvme_tcp 00:25:03.034 rmmod nvme_fabrics 00:25:03.034 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.034 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:03.034 16:26:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:03.034 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.293 16:26:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.195 16:26:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.195 16:26:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:05.195 16:26:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:05.195 16:26:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:05.195 16:26:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:05.195 16:26:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:05.195 16:26:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:05.195 16:26:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:05.195 16:26:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:05.195 16:26:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:05.195 16:26:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:08.483 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:08.483 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:09.861 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:09.861 00:25:09.861 real 0m17.351s 00:25:09.861 user 0m4.274s 00:25:09.861 sys 0m8.859s 00:25:09.861 16:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:09.861 16:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.861 ************************************ 00:25:09.861 END TEST nvmf_identify_kernel_target 00:25:09.861 ************************************ 00:25:09.861 16:26:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:09.861 16:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:09.861 16:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:09.861 16:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.861 ************************************ 00:25:09.861 START TEST nvmf_auth_host 00:25:09.861 ************************************ 00:25:09.861 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:09.861 * Looking for test storage... 
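The trace above marks the start of the nvmf_auth_host test, which the harness launches through its run_test wrapper as test/nvmf/host/auth.sh --transport=tcp. A minimal sketch of the equivalent direct invocation against a local SPDK checkout follows; the SPDK_DIR path is a hypothetical stand-in (the CI uses its own workspace path), and root is typically needed because the script drives kernel nvmet via configfs.

# Sketch only: reproduce the CI invocation on a local checkout.
SPDK_DIR=/path/to/spdk   # hypothetical; substitute your own SPDK tree
sudo "$SPDK_DIR/test/nvmf/host/auth.sh" --transport=tcp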
00:25:09.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.861 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:09.861 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:09.861 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:10.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.120 --rc genhtml_branch_coverage=1 00:25:10.120 --rc genhtml_function_coverage=1 00:25:10.120 --rc genhtml_legend=1 00:25:10.120 --rc geninfo_all_blocks=1 00:25:10.120 --rc geninfo_unexecuted_blocks=1 00:25:10.120 00:25:10.120 ' 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:10.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.120 --rc genhtml_branch_coverage=1 00:25:10.120 --rc genhtml_function_coverage=1 00:25:10.120 --rc genhtml_legend=1 00:25:10.120 --rc geninfo_all_blocks=1 00:25:10.120 --rc geninfo_unexecuted_blocks=1 00:25:10.120 00:25:10.120 ' 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:10.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.120 --rc genhtml_branch_coverage=1 00:25:10.120 --rc genhtml_function_coverage=1 00:25:10.120 --rc genhtml_legend=1 00:25:10.120 --rc geninfo_all_blocks=1 00:25:10.120 --rc geninfo_unexecuted_blocks=1 00:25:10.120 00:25:10.120 ' 00:25:10.120 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:10.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.120 --rc genhtml_branch_coverage=1 00:25:10.120 --rc genhtml_function_coverage=1 00:25:10.120 --rc genhtml_legend=1 00:25:10.120 --rc geninfo_all_blocks=1 00:25:10.120 --rc geninfo_unexecuted_blocks=1 00:25:10.121 00:25:10.121 ' 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.121 16:26:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:10.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:10.121 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:16.714 16:26:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:16.714 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:16.714 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.714 
16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:16.714 Found net devices under 0000:86:00.0: cvl_0_0 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:16.714 Found net devices under 0000:86:00.1: cvl_0_1 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.714 16:26:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.714 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.715 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.715 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.715 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:16.715 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:16.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:25:16.715 00:25:16.715 --- 10.0.0.2 ping statistics --- 00:25:16.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.715 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:16.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:25:16.715 00:25:16.715 --- 10.0.0.1 ping statistics --- 00:25:16.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.715 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2051396 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2051396 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2051396 ']' 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
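The nvmftestinit portion of the trace isolates one E810 port (cvl_0_0) in a network namespace as the target side, leaves its peer (cvl_0_1) in the root namespace as the initiator side, opens TCP port 4420, sanity-checks connectivity in both directions, and then starts nvmf_tgt inside the namespace with nvme_auth logging enabled. Condensed as a sketch below; the interface names are this testbed's and the SPDK path is a placeholder.

# Target NIC goes into its own namespace; initiator NIC stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in from the initiator interface, then check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Launch the SPDK target inside the namespace with DH-HMAC-CHAP debug logging.
ip netns exec cvl_0_0_ns_spdk /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &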
00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.715 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.997 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.997 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:16.997 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:16.997 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:16.997 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0aa4ff0c356579169c0a0330dcb3eaed 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3po 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0aa4ff0c356579169c0a0330dcb3eaed 0 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0aa4ff0c356579169c0a0330dcb3eaed 0 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0aa4ff0c356579169c0a0330dcb3eaed 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3po 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3po 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3po 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.997 16:26:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.997 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c87e09b1a59ff0cea2935b66f1af5add95493c01f29980c2749f63b7e4d424ea 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.JAo 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c87e09b1a59ff0cea2935b66f1af5add95493c01f29980c2749f63b7e4d424ea 3 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c87e09b1a59ff0cea2935b66f1af5add95493c01f29980c2749f63b7e4d424ea 3 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c87e09b1a59ff0cea2935b66f1af5add95493c01f29980c2749f63b7e4d424ea 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.JAo 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.JAo 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.JAo 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e2e978c541b9452e4333fcea5fab278244070c69c66b25b3 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5UK 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e2e978c541b9452e4333fcea5fab278244070c69c66b25b3 0 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e2e978c541b9452e4333fcea5fab278244070c69c66b25b3 0 
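The gen_dhchap_key calls in this stretch of the trace populate the keys[]/ckeys[] arrays for the auth test: each key is random hex pulled from /dev/urandom with xxd, wrapped into a DHHC-1 secret by the harness's format_dhchap_key helper (the inline python step visible above), and written to a mode-0600 temp file that is later registered via rpc keyring_file_add_key. A rough sketch of one such call, with the DHHC-1 encoding itself deliberately left to the helper rather than reimplemented here:

# Sketch of one "gen_dhchap_key null 32" invocation:
# 32 hex characters => 16 random bytes; digest id 0 means no hash transform.
len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
# In the harness, format_dhchap_key "$key" 0 produces the "DHHC-1:..." secret
# that ends up in $file; the encoding details belong to that helper and are
# not reproduced in this sketch.
chmod 0600 "$file"
echo "$file"   # this path is what gets stored in keys[0] and fed to keyring_file_add_key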
00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e2e978c541b9452e4333fcea5fab278244070c69c66b25b3 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5UK 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5UK 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.5UK 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2b0a5014990527da0d13a998aa798ba70019ff3b11da047c 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.L1Y 00:25:16.998 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2b0a5014990527da0d13a998aa798ba70019ff3b11da047c 2 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2b0a5014990527da0d13a998aa798ba70019ff3b11da047c 2 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2b0a5014990527da0d13a998aa798ba70019ff3b11da047c 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.L1Y 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.L1Y 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.L1Y 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.292 16:26:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7d2bc5d2a53ca36fa2aa5c7bfeb2ccb6 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:17.292 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cIz 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7d2bc5d2a53ca36fa2aa5c7bfeb2ccb6 1 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7d2bc5d2a53ca36fa2aa5c7bfeb2ccb6 1 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7d2bc5d2a53ca36fa2aa5c7bfeb2ccb6 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cIz 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cIz 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.cIz 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0386f5555fda8ee574e4b435cdf6aedd 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.nEN 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0386f5555fda8ee574e4b435cdf6aedd 1 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0386f5555fda8ee574e4b435cdf6aedd 1 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=0386f5555fda8ee574e4b435cdf6aedd 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.nEN 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.nEN 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.nEN 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9ade306142229f65fed5f931cd276105ee96655b4ed8f7c7 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wcA 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9ade306142229f65fed5f931cd276105ee96655b4ed8f7c7 2 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9ade306142229f65fed5f931cd276105ee96655b4ed8f7c7 2 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9ade306142229f65fed5f931cd276105ee96655b4ed8f7c7 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wcA 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wcA 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.wcA 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:17.293 16:26:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7d568f8628e6736020fc1288fde1483b 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.aRy 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7d568f8628e6736020fc1288fde1483b 0 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7d568f8628e6736020fc1288fde1483b 0 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7d568f8628e6736020fc1288fde1483b 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:17.293 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.aRy 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.aRy 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.aRy 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=91ec2568b9c2f66875b62b27b2cfa68accd3aa7c1e7264892630b4f3b4dd9e6d 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.q1r 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 91ec2568b9c2f66875b62b27b2cfa68accd3aa7c1e7264892630b4f3b4dd9e6d 3 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 91ec2568b9c2f66875b62b27b2cfa68accd3aa7c1e7264892630b4f3b4dd9e6d 3 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=91ec2568b9c2f66875b62b27b2cfa68accd3aa7c1e7264892630b4f3b4dd9e6d 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.q1r 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.q1r 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.q1r 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2051396 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2051396 ']' 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3po 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.JAo ]] 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JAo 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5UK 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.L1Y ]] 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.L1Y 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.cIz 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.nEN ]] 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nEN 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.wcA 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.589 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.848 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.848 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.aRy ]] 00:25:17.848 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.aRy 00:25:17.848 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.q1r 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.849 16:26:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:17.849 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:20.382 Waiting for block devices as requested 00:25:20.382 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:20.639 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:20.639 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:20.639 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:20.925 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:20.925 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:20.925 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:20.925 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:21.182 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:21.182 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:21.182 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:21.182 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:21.440 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:21.440 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:21.440 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:21.698 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:21.698 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:22.264 No valid GPT data, bailing 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:22.264 16:26:53 
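Before the kernel target gets a namespace, nvmf/common.sh scans /sys/block/nvme*, skips zoned devices, and treats a disk as free only when spdk-gpt.py finds no GPT ("No valid GPT data, bailing" above is the expected outcome) and blkid reports no partition table. A condensed sketch of that selection and of the configfs directories created right after it; the loop below simplifies block_in_use to the blkid probe that is visible in the trace:

# Pick the first idle, non-zoned NVMe block device for the kernel target namespace.
for block in /sys/block/nvme*; do
  dev=${block##*/}
  [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
  if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
    nvme=/dev/$dev
    break
  fi
done
mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
mkdir /sys/kernel/config/nvmet/ports/1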
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:22.264 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:22.524 00:25:22.524 Discovery Log Number of Records 2, Generation counter 2 00:25:22.524 =====Discovery Log Entry 0====== 00:25:22.524 trtype: tcp 00:25:22.524 adrfam: ipv4 00:25:22.524 subtype: current discovery subsystem 00:25:22.524 treq: not specified, sq flow control disable supported 00:25:22.524 portid: 1 00:25:22.524 trsvcid: 4420 00:25:22.524 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:22.524 traddr: 10.0.0.1 00:25:22.524 eflags: none 00:25:22.524 sectype: none 00:25:22.524 =====Discovery Log Entry 1====== 00:25:22.524 trtype: tcp 00:25:22.524 adrfam: ipv4 00:25:22.524 subtype: nvme subsystem 00:25:22.524 treq: not specified, sq flow control disable supported 00:25:22.524 portid: 1 00:25:22.524 trsvcid: 4420 00:25:22.524 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:22.524 traddr: 10.0.0.1 00:25:22.524 eflags: none 00:25:22.524 sectype: none 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
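The bare echo lines in this stretch lose their redirections in the xtrace, so the configfs files they write to never appear in the log. The sketch below reconstructs the wiring with the standard kernel nvmet attribute names (device_path, enable, addr_*, attr_allow_any_host); those targets are an assumption drawn from the usual nvmet configfs layout, not something the trace itself shows. The nvme discover call is verbatim, and its two discovery-log entries above confirm that both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 answer on 10.0.0.1:4420.

sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
# Attribute paths below are assumed (standard nvmet configfs); xtrace hides the redirect targets.
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"
# Probe the export, then restrict it to the test host NQN (same order as the trace).
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
  --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$sub/attr_allow_any_host"   # assumed target of the bare 'echo 0'
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 "$sub/allowed_hosts/nqn.2024-02.io.spdk:host0"

The nvmet_auth_set_key trace that resumes below works the same way: the echoes of 'hmac(sha256)', the FFDHE group name and the DHHC-1 secrets are written into the host's DH-HMAC-CHAP attributes on the target, with the destination files again hidden by the xtrace.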
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.524 nvme0n1 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.524 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.525 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.525 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.525 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.525 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.525 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.525 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.784 nvme0n1 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.784 16:26:53 
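Each pass in this log pairs a target-side nvmet_auth_set_key (the hash, FFDHE group and DHHC-1 secrets just discussed) with a host-side connect_authenticate, and the host side is fully visible in the trace: restrict bdev_nvme to one digest and DH group, attach with the keyring names registered earlier, check that the controller actually came up, and detach. A sketch of the sha256/ffdhe2048/key0 pass that just completed, again assuming rpc_cmd wraps scripts/rpc.py:

# Host side of one authentication pass (digest sha256, dhgroup ffdhe2048, keyid 0).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
# The attach only succeeds if DH-HMAC-CHAP completed, so a named controller proves it.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

The lone nvme0n1 lines between passes are most likely the bdev name reported back by bdev_nvme_attach_controller once the authenticated connection is up.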
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.784 16:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.043 nvme0n1 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:23.043 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.044 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.303 nvme0n1 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.303 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.563 nvme0n1 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.563 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.822 nvme0n1 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.822 16:26:54 
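That completes the sha256/ffdhe2048 sweep across all five key indices; note that key4 is attached without --dhchap-ctrlr-key because ckey4 was left empty when the keys were generated. From here the trace repeats the same pattern for ffdhe3072 and, later, the remaining DH groups and digests. The structure driving the repetition is visible in the host/auth.sh line numbers (@100-@104) and amounts to three nested loops around the two helpers; a schematic with the helper bodies elided:

# Schematic of the sweep that produces the repeated trace blocks.
for digest in "${digests[@]}"; do          # sha256 sha384 sha512
  for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
    for keyid in "${!keys[@]}"; do         # 0..4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side
      connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side, as sketched earlier
    done
  done
done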
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:23.822 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.823 16:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.082 nvme0n1 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.082 
16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.082 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.341 nvme0n1 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.341 16:26:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.341 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.601 nvme0n1 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.601 16:26:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.601 nvme0n1 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.601 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.860 16:26:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.860 16:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.860 nvme0n1 00:25:24.860 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.860 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.860 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.860 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.860 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.860 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.119 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.119 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:25.119 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.119 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.119 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.119 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.119 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.119 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:25.119 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.119 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
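
The ffdhe3072 pass traced above repeats one host-side sequence per key index: pin the initiator to a single digest/dhgroup pair, attach with the host key (plus the controller key when one exists), confirm the controller actually appeared, then detach before the next index. A condensed sketch of one such iteration follows, using only the rpc_cmd calls visible in the trace; it assumes rpc_cmd targets the running SPDK application and that the key2/ckey2 secrets were registered on both sides earlier in the test, outside this excerpt.

    # One auth iteration as traced above (sha256 / ffdhe3072, keyid=2).
    digest=sha256
    dhgroup=ffdhe3072
    keyid=2

    # Limit DH-HMAC-CHAP negotiation to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect to the target subsystem with the host key and the bidirectional controller key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The attach only succeeds if authentication passed; verify, then clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
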
ip_candidates=() 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.120 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.379 nvme0n1 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:25.379 16:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.379 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.638 nvme0n1 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.639 16:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.898 nvme0n1 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:25.898 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.899 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.157 nvme0n1 00:25:26.157 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.157 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.157 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.157 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.157 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.157 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.417 16:26:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.417 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.676 nvme0n1 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
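
The keyid-4 attaches above (in both the ffdhe3072 and ffdhe4096 passes) carry no --dhchap-ctrlr-key, because key 4 has no controller secret: the trace shows ckey= expanding to nothing and the [[ -z '' ]] check at host/auth.sh@51. The ${ckeys[keyid]:+...} expansion at host/auth.sh@58 is what makes the flag optional. A small standalone illustration of that expansion, with placeholder values rather than the DHHC-1 secrets from the test:

    # How the optional --dhchap-ctrlr-key arguments are built (cf. host/auth.sh@58).
    declare -a ckeys=([0]=c0 [1]=c1 [2]=c2 [3]=c3 [4]="")   # placeholder secrets

    for keyid in 0 4; do
        # Expands to two words (flag + key name) only when ckeys[keyid] is non-empty.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid: ${#ckey[@]} extra arg(s) ->" "${ckey[@]}"
    done
    # keyid=0: 2 extra arg(s) -> --dhchap-ctrlr-key ckey0
    # keyid=4: 0 extra arg(s) ->
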
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.676 16:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.935 nvme0n1 00:25:26.935 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.935 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.935 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.935 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.935 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.935 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.935 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.935 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.935 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.935 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 
00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.194 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.195 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.195 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.195 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.195 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.195 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.195 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.195 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.195 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.195 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.454 nvme0n1 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.454 16:26:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.454 16:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.022 nvme0n1 00:25:28.022 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.022 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.022 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.022 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.022 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.022 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.022 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.022 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.022 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.022 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.023 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.281 nvme0n1 00:25:28.281 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.281 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.281 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.281 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.281 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.281 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.281 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.281 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.281 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.281 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
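
Note: the nvmet_auth_set_key calls in this stretch (e.g. sha256 / ffdhe6144 / keyid 4 just above) configure the target side of the DH-HMAC-CHAP handshake: the helper echoes the digest as 'hmac(...)', the FFDHE group, the host secret and, when one is defined for that keyid, the controller secret. The trace does not show where those echoes land; below is a minimal sketch assuming the standard Linux nvmet configfs host attributes. The helper name, the attribute names (in particular the controller-key one) and the host NQN path are illustrative and should be verified against the running kernel.

    # Target-side counterpart of the nvmet_auth_set_key calls in this trace (sketch, not the script itself).
    nvmet_auth_set_key_sketch() {
        local digest=$1 dhgroup=$2 key=$3 ckey=$4
        # ASSUMPTION: standard Linux nvmet configfs layout; host NQN is the one used in this run.
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # e.g. 'hmac(sha256)' as echoed above
        echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"  # e.g. ffdhe6144
        echo "${key}"          > "${host_dir}/dhchap_key"      # DHHC-1:... host secret
        if [[ -n ${ckey} ]]; then
            # keyids with no controller secret (ckey='') skip this, matching the [[ -z '' ]] check in the trace
            echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"     # attribute name is an assumption
        fi
    }
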
nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.540 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.799 nvme0n1 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:28.799 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.800 16:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.368 nvme0n1 00:25:29.368 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.368 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.368 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.368 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.368 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.368 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.627 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.627 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.627 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.627 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.627 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.627 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.627 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:29.627 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.627 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
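
Note: on the host side, each connect_authenticate pass is essentially two RPCs: bdev_nvme_set_options narrows the negotiable digest/dhgroup, then bdev_nvme_attach_controller supplies the matching DH-HMAC-CHAP keys. Replayed by hand with scripts/rpc.py it would look roughly like the snippet below (rpc_cmd in the trace is the test wrapper around rpc.py; key0/ckey0 are keyring names registered earlier in host/auth.sh and not shown in this excerpt).

    # One iteration of connect_authenticate (sha256 / ffdhe8192 / keyid 0), issued manually.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    ./scripts/rpc.py bdev_nvme_get_controllers        # expect a single "nvme0" entry once the handshake succeeds
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
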
"ckey${keyid}"}) 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.628 16:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.195 nvme0n1 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:30.195 
16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.195 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.761 nvme0n1 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.761 
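
Note: the get_main_ns_ip block repeated before every attach is plain address selection: an associative array maps the transport to the environment variable holding the initiator-side address, and for tcp that resolves to NVMF_INITIATOR_IP = 10.0.0.1 in this run. A condensed sketch follows; the transport variable name and the behaviour when the variable is empty are assumptions, since only the tcp happy path is exercised here.

    # Condensed form of the IP selection seen in nvmf/common.sh (tcp path only, as in this run).
    get_main_ns_ip_sketch() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[${TEST_TRANSPORT:-tcp}]}   # ASSUMPTION: transport comes from TEST_TRANSPORT
        ip=${!ip}                                     # indirect expansion -> 10.0.0.1 here
        [[ -n ${ip} ]] && echo "${ip}"
    }
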
16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.761 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.762 16:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.328 nvme0n1 00:25:31.328 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.328 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.328 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.328 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.328 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
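
Note: after every attach the test confirms the controller actually materialized before tearing it down: bdev_nvme_get_controllers is piped through jq for the name, and the [[ nvme0 == \n\v\m\e\0 ]] seen in the trace is just a literal comparison (the right-hand side is escaped character by character so it cannot act as a glob). Done by hand, the same check is:

    # Verify the DH-CHAP-authenticated controller came up, then detach it.
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ ${name} == "nvme0" ]]                          # quoting gives the same literal match as \n\v\m\e\0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
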
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.597 16:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.165 nvme0n1 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.165 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.166 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.424 nvme0n1 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.424 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.682 nvme0n1 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:32.682 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:32.683 16:27:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.683 nvme0n1 00:25:32.683 16:27:03 
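
Note: the host/auth.sh@100-@104 markers reveal the shape of the driver: three nested loops over digests, DH groups and key ids, with nvmet_auth_set_key re-keying the target and connect_authenticate re-attaching the host for every combination. The jump from ffdhe8192 back to ffdhe2048 above is simply the dhgroup loop restarting under sha384. Schematically (array contents limited to what this excerpt actually exercises; the full script may cover more groups):

    # Driver loop implied by the @100-@104 trace markers (values restricted to this excerpt).
    digests=(sha256 sha384)                            # sha384 starts around 00:25:32 above
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do             # keys[0..4] are registered earlier in host/auth.sh
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
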
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.683 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.941 16:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.941 nvme0n1 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.941 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:32.942 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.942 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
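
Note: all of the secrets in this log use the DH-HMAC-CHAP representation DHHC-1:<id>:<base64>:, where the two-digit id records how the secret was transformed (00 = used as-is, 01/02/03 = SHA-256/384/512); that is why key4 and ckey0 carry 03 while key0 and key1 carry 00. The mapping should be double-checked against the NVMe DH-HMAC-CHAP secret format if it matters. A quick classification of one of the keys above (value copied verbatim from the trace):

    # Classify the transformation id of a DH-HMAC-CHAP secret (key0 from this trace).
    key='DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5:'
    case ${key#DHHC-1:} in
        00:*) echo "secret stored as-is" ;;
        01:*) echo "secret transformed with SHA-256" ;;
        02:*) echo "secret transformed with SHA-384" ;;
        03:*) echo "secret transformed with SHA-512" ;;
        *)    echo "unrecognized DHHC-1 secret" ;;
    esac
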
host/auth.sh@44 -- # digest=sha384 00:25:32.942 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.942 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.942 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:32.942 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.942 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.942 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.200 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.200 nvme0n1 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.201 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.459 nvme0n1 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.459 
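
The nvmet_auth_set_key sha384 ffdhe3072 0 trace above provisions the kernel soft target with the same credentials the host will present. The xtrace only records the echo arguments, not the files they are redirected into; on a Linux nvmet target those writes would normally land in the per-host DH-HMAC-CHAP configfs attributes, so the sketch below fills in the paths under that assumption (hostnqn taken from the attach calls, key/ckey copied from the keyid-0 entries above):

  hostnqn=nqn.2024-02.io.spdk:host0
  hostdir=/sys/kernel/config/nvmet/hosts/${hostnqn}   # assumed configfs layout, not visible in the trace
  key='DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5:'
  ckey='DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=:'

  echo 'hmac(sha384)' > "${hostdir}/dhchap_hash"      # digest echoed at host/auth.sh@48
  echo 'ffdhe3072'    > "${hostdir}/dhchap_dhgroup"   # DH group echoed at host/auth.sh@49
  echo "${key}"       > "${hostdir}/dhchap_key"       # host key echoed at host/auth.sh@50
  [[ -n ${ckey} ]] && echo "${ckey}" > "${hostdir}/dhchap_ctrl_key"   # controller key, written only when set (host/auth.sh@51)
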
16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.459 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.460 16:27:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.460 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.718 nvme0n1 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.718 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.719 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.719 16:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.977 nvme0n1 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.977 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.236 nvme0n1 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.236 
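
Note how keyid 4, set up in the surrounding lines, is assigned an empty ckey (host/auth.sh@46). Two guards make the controller key optional end to end: the [[ -z ... ]] test at host/auth.sh@51 skips writing it on the target, and the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 drops the --dhchap-ctrlr-key argument on the host. A small stand-alone illustration of that expansion (the key string here is a made-up placeholder):

  ckeys=([0]='DHHC-1:03:placeholderplaceholder=:' [4]='')   # hypothetical values, for illustration only
  for keyid in 0 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=${keyid}: ${ckey[*]:-<no controller-key arguments>}"
  done
  # keyid 0 gets the two extra arguments "--dhchap-ctrlr-key ckey0"; keyid 4 gets none
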
16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.236 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.495 nvme0n1 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.495 
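
The host side of each iteration reduces to two RPCs once get_main_ns_ip has resolved the initiator address (for tcp it indirectly expands NVMF_INITIATOR_IP, which is 10.0.0.1 in this run): bdev_nvme_set_options restricts the negotiable digest and DH group, and bdev_nvme_attach_controller dials the target with the named DH-HMAC-CHAP keys. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, and key4/ckeyN are key names the test registered earlier, outside this excerpt. Condensed, for the keyid-4 attach just above:

  rpc=./scripts/rpc.py     # relative to the spdk checkout
  ip=10.0.0.1              # what get_main_ns_ip echoes for the tcp transport here

  "$rpc" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key4    # keyid 4 has no controller key; keyids 0-3 also pass --dhchap-ctrlr-key ckeyN
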
16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.495 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.753 nvme0n1 00:25:34.753 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.753 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.753 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.753 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.753 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.753 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.753 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.753 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.011 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.011 16:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.011 16:27:06 
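
After every attach the test confirms that a controller named nvme0 came up and then detaches it before moving to the next key, which is what the bdev_nvme_get_controllers / jq / detach lines around here do. Condensed:

  rpc=./scripts/rpc.py

  name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                      # the trace's [[ nvme0 == \n\v\m\e\0 ]] check
  "$rpc" bdev_nvme_detach_controller nvme0    # tear the session down before the next keyid
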
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.011 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.270 nvme0n1 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.270 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.528 nvme0n1 00:25:35.528 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.528 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.528 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.528 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.528 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.528 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.528 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.528 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.528 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.528 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.528 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.529 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.787 nvme0n1 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.787 16:27:06 
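
The recurring host/auth.sh@101 and @102 markers are the two loops that generate all of the above: the outer one walks the DH groups, the inner one walks the key indices, and each pass runs nvmet_auth_set_key followed by connect_authenticate. Roughly, for the sha384 leg shown in this excerpt (the helpers and the full dhgroups/keys arrays live in host/auth.sh and are not reproduced here):

  # assumes the helpers from test/nvmf/host/auth.sh are sourced (path assumed)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups visible in this excerpt; the script's list may be longer
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in 0 1 2 3 4; do                       # key indices exercised above
          nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"   # target side (host/auth.sh@103)
          connect_authenticate sha384 "$dhgroup" "$keyid"   # host side  (host/auth.sh@104)
      done
  done
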
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.787 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.788 16:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.046 nvme0n1 00:25:36.046 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.046 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.046 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.046 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.046 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.046 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:36.303 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.304 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.562 nvme0n1 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.562 16:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.128 nvme0n1 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.128 16:27:08 
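Each pass above follows the same host-side recipe: restrict the bdev_nvme driver to one DHCHAP digest/dhgroup pair, resolve the initiator address, attach a controller with that key pair, then verify and detach. A minimal sketch of the two RPCs at the center of it, using the same names that appear in the trace (rpc_cmd is the harness's RPC wrapper; key1 and ckey1 refer to keys the script registered earlier in the run):

    # Allow only this digest/dhgroup combination for the next connect attempt.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Connect to the kernel nvmet target at the initiator IP, authenticating with key 1
    # and supplying the matching controller key for bidirectional authentication.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1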
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.128 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.129 16:27:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.129 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.387 nvme0n1 00:25:37.387 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.387 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.645 16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.645 
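The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line that repeats in the trace builds the optional controller-key argument only when a controller key exists for that key index; for key index 4, which follows, the entry is empty and the attach is issued with --dhchap-key alone. A small illustration of that bash idiom with hypothetical stand-in values (the real script loads DHHC-1 secrets instead):

    #!/usr/bin/env bash
    # Hypothetical values: index 0 has a controller key, index 4 does not.
    declare -a ckeys=([0]="example-ctrlr-secret" [4]="")
    for keyid in 0 4; do
        # :+ expands to the alternate words only when ckeys[keyid] is set and non-empty,
        # so ckey becomes either an empty array or the two extra CLI arguments.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller key>}"
    done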
16:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.903 nvme0n1 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.903 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.904 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.904 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.904 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.904 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.160 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.416 nvme0n1 00:25:38.416 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.416 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.416 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.416 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.416 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.416 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.417 16:27:09 
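The get_main_ns_ip block that precedes every attach maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints its value, which is why 10.0.0.1 is echoed before each connect here. A reduced sketch of that lookup, assuming TEST_TRANSPORT and NVMF_INITIATOR_IP come from the surrounding test environment and skipping the emptiness checks the trace shows:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Pick the variable *name* for this transport, then dereference it.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        echo "${!ip}"    # 10.0.0.1 when TEST_TRANSPORT=tcp, as in this run
    }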
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.417 16:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.982 nvme0n1 00:25:38.982 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.982 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.982 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.982 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.982 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.982 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.241 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.807 nvme0n1 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:39.807 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.808 
16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.808 16:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.375 nvme0n1 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.375 16:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.940 nvme0n1 00:25:40.940 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.940 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.940 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.940 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.941 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.941 16:27:12 
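Every nvme0n1 marker above is followed by the same verification and teardown: list the attached controllers, check that the expected nvme0 controller exists (proof that the DH-HMAC-CHAP exchange succeeded), then detach so the next digest/dhgroup/keyid combination starts clean. A compact sketch of that check using the RPCs seen in the trace:

    # List attached controllers; a successful authenticated connect yields exactly nvme0.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]]    # any other result fails this step of the run

    # Tear the controller down before the next key/dhgroup combination is tried.
    rpc_cmd bdev_nvme_detach_controller nvme0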
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.941 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.941 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.941 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.941 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.198 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.199 16:27:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.199 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.765 nvme0n1 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.765 16:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.023 nvme0n1 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.023 nvme0n1 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.023 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:42.281 
16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.281 nvme0n1 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.281 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.282 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.282 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.282 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.282 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.541 
16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.541 nvme0n1 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.541 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.542 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.800 nvme0n1 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.800 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.801 16:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.059 nvme0n1 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.059 
16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.059 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.060 16:27:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.060 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.318 nvme0n1 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:43.318 16:27:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.318 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.576 nvme0n1 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.577 16:27:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.577 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.836 nvme0n1 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.836 
16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.836 16:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.836 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
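The trace above repeats one pattern for every DH group and key id: program the target secret, constrain the host to the digest/dhgroup under test, attach with the matching host key (plus a controller key when one is defined), confirm the authenticated controller shows up, then detach. Condensed, the sha512 round logged here follows roughly the sketch below; this is a paraphrase of what host/auth.sh prints in this trace rather than the script itself, and keys[]/ckeys[] stand in for the DHHC-1 secrets set up earlier in the run.

# Rough paraphrase of the loop visible in this trace (assumes the test helpers
# rpc_cmd and nvmet_auth_set_key plus the keys[]/ckeys[] arrays from host/auth.sh).
for dhgroup in "${dhgroups[@]}"; do          # ffdhe2048, ffdhe3072, ffdhe4096 in this excerpt
  for keyid in "${!keys[@]}"; do
    # Target side: install the DH-HMAC-CHAP secret for this key id.
    nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
    # Host side: only allow the digest/dhgroup combination under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    # Attach with the matching host key; add the controller key only when the
    # test defines one for this key id (keyid 4 has an empty ckey in this trace).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    # The attach only sticks if authentication succeeded, so the controller must
    # be listed before it is torn down for the next iteration.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done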
00:25:44.094 nvme0n1 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:44.094 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.095 16:27:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.095 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.353 nvme0n1 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.353 16:27:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.353 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.611 16:27:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.611 nvme0n1 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.611 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.869 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.870 16:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 nvme0n1 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.128 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.386 nvme0n1 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.386 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.387 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.645 nvme0n1 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.645 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.903 16:27:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.903 16:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.161 nvme0n1 00:25:46.161 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.161 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.161 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.161 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.161 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.161 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.161 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.161 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.161 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.161 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.161 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.162 16:27:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.162 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.728 nvme0n1 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.728 16:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.986 nvme0n1 00:25:46.986 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.986 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.986 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.986 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.986 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.986 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.986 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.986 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.986 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.986 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.244 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.244 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.245 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.503 nvme0n1 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.503 16:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.503 16:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.076 nvme0n1 00:25:48.076 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.076 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.076 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.076 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.076 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.076 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFhNGZmMGMzNTY1NzkxNjljMGEwMzMwZGNiM2VhZWQDp5H5: 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: ]] 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3ZTA5YjFhNTlmZjBjZWEyOTM1YjY2ZjFhZjVhZGQ5NTQ5M2MwMWYyOTk4MGMyNzQ5ZjYzYjdlNGQ0MjRlYVlvLgQ=: 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.077 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.644 nvme0n1 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.644 16:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.211 nvme0n1 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.211 16:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:49.211 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.470 16:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.470 16:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.037 nvme0n1 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFkZTMwNjE0MjIyOWY2NWZlZDVmOTMxY2QyNzYxMDVlZTk2NjU1YjRlZDhmN2M3q6BqaA==: 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: ]] 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q1NjhmODYyOGU2NzM2MDIwZmMxMjg4ZmRlMTQ4M2JRnggr: 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.037 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.037 
16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.604 nvme0n1 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFlYzI1NjhiOWMyZjY2ODc1YjYyYjI3YjJjZmE2OGFjY2QzYWE3YzFlNzI2NDg5MjYzMGI0ZjNiNGRkOWU2ZMW+w1c=: 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.604 16:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.170 nvme0n1 00:25:51.170 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.170 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.170 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.170 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.170 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.170 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.170 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.170 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.170 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.170 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.429 request: 00:25:51.429 { 00:25:51.429 "name": "nvme0", 00:25:51.429 "trtype": "tcp", 00:25:51.429 "traddr": "10.0.0.1", 00:25:51.429 "adrfam": "ipv4", 00:25:51.429 "trsvcid": "4420", 00:25:51.429 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:51.429 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:51.429 "prchk_reftag": false, 00:25:51.429 "prchk_guard": false, 00:25:51.429 "hdgst": false, 00:25:51.429 "ddgst": false, 00:25:51.429 "allow_unrecognized_csi": false, 00:25:51.429 "method": "bdev_nvme_attach_controller", 00:25:51.429 "req_id": 1 00:25:51.429 } 00:25:51.429 Got JSON-RPC error response 00:25:51.429 response: 00:25:51.429 { 00:25:51.429 "code": -5, 00:25:51.429 "message": "Input/output error" 00:25:51.429 } 00:25:51.429 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
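
Note on the flow traced above: for each key id the test programs the kernel nvmet target with the matching digest, DH group and DHHC-1 secret, points the SPDK side at the same parameters with bdev_nvme_set_options, then attaches and detaches a controller to prove the DH-HMAC-CHAP handshake completes. A minimal sketch of one such iteration, using only the RPC names and flags that appear in the trace (rpc_cmd is the harness's JSON-RPC wrapper; the key ids are placeholders for the DHHC-1 secrets shown above, not new values):

  # allow sha512 / ffdhe8192 on the initiator and connect with key 2 plus its controller key
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # a successful handshake leaves exactly one controller named nvme0
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
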
00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.430 request: 00:25:51.430 { 00:25:51.430 "name": "nvme0", 00:25:51.430 "trtype": "tcp", 00:25:51.430 "traddr": "10.0.0.1", 00:25:51.430 "adrfam": "ipv4", 00:25:51.430 "trsvcid": "4420", 00:25:51.430 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:51.430 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:51.430 "prchk_reftag": false, 00:25:51.430 "prchk_guard": false, 00:25:51.430 "hdgst": false, 00:25:51.430 "ddgst": false, 00:25:51.430 "dhchap_key": "key2", 00:25:51.430 "allow_unrecognized_csi": false, 00:25:51.430 "method": "bdev_nvme_attach_controller", 00:25:51.430 "req_id": 1 00:25:51.430 } 00:25:51.430 Got JSON-RPC error response 00:25:51.430 response: 00:25:51.430 { 00:25:51.430 "code": -5, 00:25:51.430 "message": "Input/output error" 00:25:51.430 } 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.430 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
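
The NOT blocks above are the negative half of the check: attaching with no DHCHAP key, or with a key pairing the target is not configured for, must fail, and the harness asserts that bdev_nvme_attach_controller returns the -5 "Input/output error" JSON-RPC response and that no controller is left behind. A hedged sketch of that expectation, reusing the helpers visible in the trace (NOT inverts the exit status of the command it wraps):

  # both attempts are expected to fail against the sha256/ffdhe2048 key loaded on the target
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
  # nothing should have connected
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))
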
00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.689 request: 00:25:51.689 { 00:25:51.689 "name": "nvme0", 00:25:51.689 "trtype": "tcp", 00:25:51.689 "traddr": "10.0.0.1", 00:25:51.689 "adrfam": "ipv4", 00:25:51.689 "trsvcid": "4420", 00:25:51.689 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:51.689 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:51.689 "prchk_reftag": false, 00:25:51.689 "prchk_guard": false, 00:25:51.689 "hdgst": false, 00:25:51.689 "ddgst": false, 00:25:51.689 "dhchap_key": "key1", 00:25:51.689 "dhchap_ctrlr_key": "ckey2", 00:25:51.689 "allow_unrecognized_csi": false, 00:25:51.689 "method": "bdev_nvme_attach_controller", 00:25:51.689 "req_id": 1 00:25:51.689 } 00:25:51.689 Got JSON-RPC error response 00:25:51.689 response: 00:25:51.689 { 00:25:51.689 "code": -5, 00:25:51.689 "message": "Input/output 
error" 00:25:51.689 } 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.689 nvme0n1 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.689 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.947 16:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.947 request: 00:25:51.947 { 00:25:51.947 "name": "nvme0", 00:25:51.948 "dhchap_key": "key1", 00:25:51.948 "dhchap_ctrlr_key": "ckey2", 00:25:51.948 "method": "bdev_nvme_set_keys", 00:25:51.948 "req_id": 1 00:25:51.948 } 00:25:51.948 Got JSON-RPC error response 00:25:51.948 response: 00:25:51.948 { 00:25:51.948 "code": -13, 00:25:51.948 "message": "Permission denied" 00:25:51.948 } 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:51.948 16:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:52.881 16:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.881 16:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:52.881 16:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.139 16:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.139 16:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.139 16:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:53.139 16:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlOTc4YzU0MWI5NDUyZTQzMzNmY2VhNWZhYjI3ODI0NDA3MGM2OWM2NmIyNWIzIGGNSg==: 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: ]] 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MmIwYTUwMTQ5OTA1MjdkYTBkMTNhOTk4YWE3OThiYTcwMDE5ZmYzYjExZGEwNDdjU3/gFA==: 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.267 nvme0n1 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2QyYmM1ZDJhNTNjYTM2ZmEyYWE1YzdiZmViMmNjYjbwWEgF: 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: ]] 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4NmY1NTU1ZmRhOGVlNTc0ZTRiNDM1Y2RmNmFlZGTYz8q1: 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.267 request: 00:25:54.267 { 00:25:54.267 "name": "nvme0", 00:25:54.267 "dhchap_key": "key2", 00:25:54.267 "dhchap_ctrlr_key": "ckey1", 00:25:54.267 "method": "bdev_nvme_set_keys", 00:25:54.267 "req_id": 1 00:25:54.267 } 00:25:54.267 Got JSON-RPC error response 00:25:54.267 response: 00:25:54.267 { 00:25:54.267 "code": -13, 00:25:54.267 "message": "Permission denied" 00:25:54.267 } 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.267 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.525 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:54.525 16:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:55.458 16:27:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:55.458 rmmod nvme_tcp 00:25:55.458 rmmod nvme_fabrics 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2051396 ']' 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2051396 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2051396 ']' 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2051396 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2051396 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2051396' 00:25:55.458 killing process with pid 2051396 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2051396 00:25:55.458 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2051396 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:25:55.717 16:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.622 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.622 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:57.622 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:57.622 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:57.622 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:57.622 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:57.622 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:57.622 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:57.622 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:57.881 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:57.881 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:57.881 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:57.881 16:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:01.170 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:01.170 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:02.111 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:02.370 16:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3po /tmp/spdk.key-null.5UK /tmp/spdk.key-sha256.cIz /tmp/spdk.key-sha384.wcA /tmp/spdk.key-sha512.q1r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:02.370 16:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:04.906 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:04.906 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
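
The bdev_nvme_set_keys calls traced above exercise live re-keying of an already attached controller: setting the key pair the target expects succeeds, while swapping in a key the target was not configured for is rejected with the -13 "Permission denied" JSON-RPC error rather than an I/O error, and the harness then simply polls until the controller list drains. A hedged sketch of that pattern, using only names that appear in the trace:

  rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2      # accepted
  NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2  # -13 Permission denied
  # wait until no controllers remain before moving on
  while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do sleep 1s; done
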
00:26:04.906 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:04.906 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:05.165 00:26:05.165 real 0m55.268s 00:26:05.165 user 0m49.591s 00:26:05.165 sys 0m12.705s 00:26:05.165 16:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.165 16:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.165 ************************************ 00:26:05.165 END TEST nvmf_auth_host 00:26:05.165 ************************************ 00:26:05.165 16:27:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:05.165 16:27:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:05.165 16:27:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:05.165 16:27:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.165 16:27:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.165 ************************************ 00:26:05.165 START TEST nvmf_digest 00:26:05.165 ************************************ 00:26:05.165 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:05.165 * Looking for test storage... 
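
The teardown traced above happens in two layers: nvmftestfini unloads the host-side nvme-tcp and nvme-fabrics modules, kills the nvmf target process, restores the iptables rules and flushes the test namespace addressing, while the kernel nvmet side is unwound through configfs in dependency order before the temporary DHHC-1 key files under /tmp are deleted. The configfs portion, copied from the paths shown in the trace, boils down to:

  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet
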
00:26:05.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.165 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:05.165 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:05.165 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.424 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:05.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.425 --rc genhtml_branch_coverage=1 00:26:05.425 --rc genhtml_function_coverage=1 00:26:05.425 --rc genhtml_legend=1 00:26:05.425 --rc geninfo_all_blocks=1 00:26:05.425 --rc geninfo_unexecuted_blocks=1 00:26:05.425 00:26:05.425 ' 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:05.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.425 --rc genhtml_branch_coverage=1 00:26:05.425 --rc genhtml_function_coverage=1 00:26:05.425 --rc genhtml_legend=1 00:26:05.425 --rc geninfo_all_blocks=1 00:26:05.425 --rc geninfo_unexecuted_blocks=1 00:26:05.425 00:26:05.425 ' 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:05.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.425 --rc genhtml_branch_coverage=1 00:26:05.425 --rc genhtml_function_coverage=1 00:26:05.425 --rc genhtml_legend=1 00:26:05.425 --rc geninfo_all_blocks=1 00:26:05.425 --rc geninfo_unexecuted_blocks=1 00:26:05.425 00:26:05.425 ' 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:05.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.425 --rc genhtml_branch_coverage=1 00:26:05.425 --rc genhtml_function_coverage=1 00:26:05.425 --rc genhtml_legend=1 00:26:05.425 --rc geninfo_all_blocks=1 00:26:05.425 --rc geninfo_unexecuted_blocks=1 00:26:05.425 00:26:05.425 ' 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.425 
16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:05.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:05.425 16:27:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:05.425 16:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.996 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.996 
16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:11.997 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:11.997 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:11.997 Found net devices under 0000:86:00.0: cvl_0_0 
00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:11.997 Found net devices under 0000:86:00.1: cvl_0_1 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:11.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:11.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:26:11.997 00:26:11.997 --- 10.0.0.2 ping statistics --- 00:26:11.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.997 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:11.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:11.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:26:11.997 00:26:11.997 --- 10.0.0.1 ping statistics --- 00:26:11.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.997 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:11.997 ************************************ 00:26:11.997 START TEST nvmf_digest_clean 00:26:11.997 ************************************ 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2065908 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2065908 00:26:11.997 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2065908 ']' 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.998 [2024-11-20 16:27:42.498158] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:26:11.998 [2024-11-20 16:27:42.498200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.998 [2024-11-20 16:27:42.578363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.998 [2024-11-20 16:27:42.618762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.998 [2024-11-20 16:27:42.618797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.998 [2024-11-20 16:27:42.618804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.998 [2024-11-20 16:27:42.618810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.998 [2024-11-20 16:27:42.618816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:11.998 [2024-11-20 16:27:42.619352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.998 null0 00:26:11.998 [2024-11-20 16:27:42.767758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.998 [2024-11-20 16:27:42.791945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2065934 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2065934 /var/tmp/bperf.sock 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2065934 ']' 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:11.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.998 [2024-11-20 16:27:42.844734] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:26:11.998 [2024-11-20 16:27:42.844776] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065934 ] 00:26:11.998 [2024-11-20 16:27:42.921375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.998 [2024-11-20 16:27:42.963591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:11.998 16:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:12.256 16:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.256 16:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.514 nvme0n1 00:26:12.515 16:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:12.515 16:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:12.773 Running I/O for 2 seconds... 
00:26:14.643 26116.00 IOPS, 102.02 MiB/s [2024-11-20T15:27:45.877Z] 25961.00 IOPS, 101.41 MiB/s 00:26:14.643 Latency(us) 00:26:14.643 [2024-11-20T15:27:45.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.643 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:14.643 nvme0n1 : 2.01 25966.61 101.43 0.00 0.00 4924.81 2512.21 11671.65 00:26:14.643 [2024-11-20T15:27:45.877Z] =================================================================================================================== 00:26:14.643 [2024-11-20T15:27:45.877Z] Total : 25966.61 101.43 0.00 0.00 4924.81 2512.21 11671.65 00:26:14.643 { 00:26:14.643 "results": [ 00:26:14.643 { 00:26:14.643 "job": "nvme0n1", 00:26:14.643 "core_mask": "0x2", 00:26:14.643 "workload": "randread", 00:26:14.643 "status": "finished", 00:26:14.643 "queue_depth": 128, 00:26:14.643 "io_size": 4096, 00:26:14.643 "runtime": 2.006192, 00:26:14.643 "iops": 25966.60738354056, 00:26:14.643 "mibps": 101.43206009195531, 00:26:14.643 "io_failed": 0, 00:26:14.643 "io_timeout": 0, 00:26:14.643 "avg_latency_us": 4924.80688338114, 00:26:14.643 "min_latency_us": 2512.213333333333, 00:26:14.643 "max_latency_us": 11671.649523809523 00:26:14.643 } 00:26:14.643 ], 00:26:14.643 "core_count": 1 00:26:14.643 } 00:26:14.643 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:14.643 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:14.643 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:14.643 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:14.643 | select(.opcode=="crc32c") 00:26:14.643 | "\(.module_name) \(.executed)"' 00:26:14.643 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:14.902 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:14.902 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:14.902 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:14.902 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:14.902 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2065934 00:26:14.902 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2065934 ']' 00:26:14.902 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2065934 00:26:14.902 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:14.902 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.902 16:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2065934 00:26:14.902 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:14.902 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:26:14.902 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2065934' 00:26:14.902 killing process with pid 2065934 00:26:14.902 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2065934 00:26:14.902 Received shutdown signal, test time was about 2.000000 seconds 00:26:14.902 00:26:14.902 Latency(us) 00:26:14.902 [2024-11-20T15:27:46.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.902 [2024-11-20T15:27:46.136Z] =================================================================================================================== 00:26:14.902 [2024-11-20T15:27:46.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.902 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2065934 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2066411 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2066411 /var/tmp/bperf.sock 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2066411 ']' 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:15.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.160 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.160 [2024-11-20 16:27:46.248432] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:26:15.160 [2024-11-20 16:27:46.248489] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066411 ] 00:26:15.160 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:15.160 Zero copy mechanism will not be used. 00:26:15.160 [2024-11-20 16:27:46.321940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.160 [2024-11-20 16:27:46.364126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.419 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.419 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:15.419 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:15.419 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:15.419 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:15.677 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.677 16:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.935 nvme0n1 00:26:15.935 16:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:15.935 16:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.935 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:15.935 Zero copy mechanism will not be used. 00:26:15.935 Running I/O for 2 seconds... 
00:26:18.243 6037.00 IOPS, 754.62 MiB/s [2024-11-20T15:27:49.477Z] 5992.50 IOPS, 749.06 MiB/s 00:26:18.243 Latency(us) 00:26:18.243 [2024-11-20T15:27:49.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.243 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:18.243 nvme0n1 : 2.00 5994.46 749.31 0.00 0.00 2666.36 631.95 7146.54 00:26:18.243 [2024-11-20T15:27:49.477Z] =================================================================================================================== 00:26:18.243 [2024-11-20T15:27:49.477Z] Total : 5994.46 749.31 0.00 0.00 2666.36 631.95 7146.54 00:26:18.243 { 00:26:18.243 "results": [ 00:26:18.243 { 00:26:18.243 "job": "nvme0n1", 00:26:18.243 "core_mask": "0x2", 00:26:18.243 "workload": "randread", 00:26:18.243 "status": "finished", 00:26:18.243 "queue_depth": 16, 00:26:18.243 "io_size": 131072, 00:26:18.243 "runtime": 2.002015, 00:26:18.243 "iops": 5994.460580964678, 00:26:18.243 "mibps": 749.3075726205848, 00:26:18.243 "io_failed": 0, 00:26:18.243 "io_timeout": 0, 00:26:18.243 "avg_latency_us": 2666.36352494435, 00:26:18.243 "min_latency_us": 631.9542857142857, 00:26:18.243 "max_latency_us": 7146.544761904762 00:26:18.243 } 00:26:18.243 ], 00:26:18.243 "core_count": 1 00:26:18.243 } 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:18.243 | select(.opcode=="crc32c") 00:26:18.243 | "\(.module_name) \(.executed)"' 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2066411 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2066411 ']' 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2066411 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2066411 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2066411' 00:26:18.243 killing process with pid 2066411 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2066411 00:26:18.243 Received shutdown signal, test time was about 2.000000 seconds 00:26:18.243 00:26:18.243 Latency(us) 00:26:18.243 [2024-11-20T15:27:49.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.243 [2024-11-20T15:27:49.477Z] =================================================================================================================== 00:26:18.243 [2024-11-20T15:27:49.477Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:18.243 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2066411 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2067093 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2067093 /var/tmp/bperf.sock 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2067093 ']' 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.502 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.502 [2024-11-20 16:27:49.650694] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:26:18.502 [2024-11-20 16:27:49.650739] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2067093 ] 00:26:18.502 [2024-11-20 16:27:49.723898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.759 [2024-11-20 16:27:49.763937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.759 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.759 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:18.759 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:18.759 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:18.760 16:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:19.017 16:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.017 16:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.276 nvme0n1 00:26:19.276 16:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:19.276 16:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.276 Running I/O for 2 seconds... 
00:26:21.585 27022.00 IOPS, 105.55 MiB/s [2024-11-20T15:27:52.819Z] 27175.00 IOPS, 106.15 MiB/s 00:26:21.585 Latency(us) 00:26:21.585 [2024-11-20T15:27:52.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.585 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:21.585 nvme0n1 : 2.01 27176.27 106.16 0.00 0.00 4702.88 3432.84 10173.68 00:26:21.585 [2024-11-20T15:27:52.819Z] =================================================================================================================== 00:26:21.585 [2024-11-20T15:27:52.819Z] Total : 27176.27 106.16 0.00 0.00 4702.88 3432.84 10173.68 00:26:21.585 { 00:26:21.585 "results": [ 00:26:21.585 { 00:26:21.585 "job": "nvme0n1", 00:26:21.585 "core_mask": "0x2", 00:26:21.585 "workload": "randwrite", 00:26:21.585 "status": "finished", 00:26:21.585 "queue_depth": 128, 00:26:21.585 "io_size": 4096, 00:26:21.585 "runtime": 2.005794, 00:26:21.585 "iops": 27176.27034481108, 00:26:21.585 "mibps": 106.15730603441828, 00:26:21.585 "io_failed": 0, 00:26:21.585 "io_timeout": 0, 00:26:21.585 "avg_latency_us": 4702.881130766745, 00:26:21.585 "min_latency_us": 3432.8380952380953, 00:26:21.585 "max_latency_us": 10173.683809523809 00:26:21.585 } 00:26:21.585 ], 00:26:21.585 "core_count": 1 00:26:21.585 } 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:21.585 | select(.opcode=="crc32c") 00:26:21.585 | "\(.module_name) \(.executed)"' 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2067093 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2067093 ']' 00:26:21.585 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2067093 00:26:21.586 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:21.586 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.586 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2067093 00:26:21.586 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:21.586 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:26:21.586 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2067093' 00:26:21.586 killing process with pid 2067093 00:26:21.586 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2067093 00:26:21.586 Received shutdown signal, test time was about 2.000000 seconds 00:26:21.586 00:26:21.586 Latency(us) 00:26:21.586 [2024-11-20T15:27:52.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.586 [2024-11-20T15:27:52.820Z] =================================================================================================================== 00:26:21.586 [2024-11-20T15:27:52.820Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.586 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2067093 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2067574 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2067574 /var/tmp/bperf.sock 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2067574 ']' 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:21.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.844 16:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.844 [2024-11-20 16:27:52.905712] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:26:21.844 [2024-11-20 16:27:52.905762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2067574 ] 00:26:21.844 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:21.844 Zero copy mechanism will not be used. 00:26:21.844 [2024-11-20 16:27:52.979393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.844 [2024-11-20 16:27:53.016194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.844 16:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.844 16:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:21.844 16:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:21.844 16:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:21.844 16:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:22.103 16:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.103 16:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.361 nvme0n1 00:26:22.361 16:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:22.361 16:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:22.619 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:22.619 Zero copy mechanism will not be used. 00:26:22.619 Running I/O for 2 seconds... 
00:26:24.489 6317.00 IOPS, 789.62 MiB/s [2024-11-20T15:27:55.723Z] 6597.00 IOPS, 824.62 MiB/s 00:26:24.489 Latency(us) 00:26:24.489 [2024-11-20T15:27:55.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.490 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:24.490 nvme0n1 : 2.00 6595.13 824.39 0.00 0.00 2421.89 1833.45 12795.12 00:26:24.490 [2024-11-20T15:27:55.724Z] =================================================================================================================== 00:26:24.490 [2024-11-20T15:27:55.724Z] Total : 6595.13 824.39 0.00 0.00 2421.89 1833.45 12795.12 00:26:24.490 { 00:26:24.490 "results": [ 00:26:24.490 { 00:26:24.490 "job": "nvme0n1", 00:26:24.490 "core_mask": "0x2", 00:26:24.490 "workload": "randwrite", 00:26:24.490 "status": "finished", 00:26:24.490 "queue_depth": 16, 00:26:24.490 "io_size": 131072, 00:26:24.490 "runtime": 2.003599, 00:26:24.490 "iops": 6595.132059858285, 00:26:24.490 "mibps": 824.3915074822856, 00:26:24.490 "io_failed": 0, 00:26:24.490 "io_timeout": 0, 00:26:24.490 "avg_latency_us": 2421.892144983315, 00:26:24.490 "min_latency_us": 1833.4476190476191, 00:26:24.490 "max_latency_us": 12795.12380952381 00:26:24.490 } 00:26:24.490 ], 00:26:24.490 "core_count": 1 00:26:24.490 } 00:26:24.490 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:24.490 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:24.490 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:24.490 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:24.490 | select(.opcode=="crc32c") 00:26:24.490 | "\(.module_name) \(.executed)"' 00:26:24.490 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2067574 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2067574 ']' 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2067574 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2067574 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2067574' 00:26:24.748 killing process with pid 2067574 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2067574 00:26:24.748 Received shutdown signal, test time was about 2.000000 seconds 00:26:24.748 00:26:24.748 Latency(us) 00:26:24.748 [2024-11-20T15:27:55.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.748 [2024-11-20T15:27:55.982Z] =================================================================================================================== 00:26:24.748 [2024-11-20T15:27:55.982Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.748 16:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2067574 00:26:25.007 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2065908 00:26:25.007 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2065908 ']' 00:26:25.007 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2065908 00:26:25.007 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:25.007 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:25.007 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2065908 00:26:25.007 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:25.007 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:25.007 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2065908' 00:26:25.007 killing process with pid 2065908 00:26:25.007 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2065908 00:26:25.007 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2065908 00:26:25.266 00:26:25.266 real 0m13.884s 00:26:25.266 user 0m26.446s 00:26:25.266 sys 0m4.641s 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:25.266 ************************************ 00:26:25.266 END TEST nvmf_digest_clean 00:26:25.266 ************************************ 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:25.266 ************************************ 00:26:25.266 START TEST nvmf_digest_error 00:26:25.266 ************************************ 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2068098 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2068098 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2068098 ']' 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.266 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.266 [2024-11-20 16:27:56.453615] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:26:25.266 [2024-11-20 16:27:56.453657] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.525 [2024-11-20 16:27:56.532384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.525 [2024-11-20 16:27:56.572193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.525 [2024-11-20 16:27:56.572232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.525 [2024-11-20 16:27:56.572239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.525 [2024-11-20 16:27:56.572246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.525 [2024-11-20 16:27:56.572251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
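Since this nvmf_tgt instance (shm id 0) was launched with -e 0xFFFF, all tracepoint groups are enabled and the notices above describe how a trace could be pulled while it runs; a sketch of that, assuming spdk_trace is built at the usual build/bin location in this workspace:

    # Snapshot the live trace of nvmf_tgt instance 0, as suggested by the notice above.
    $SPDK/build/bin/spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis.
    cp /dev/shm/nvmf_trace.0 /tmp/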
00:26:25.525 [2024-11-20 16:27:56.572803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.525 [2024-11-20 16:27:56.657294] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.525 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.525 null0 00:26:25.525 [2024-11-20 16:27:56.753611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.783 [2024-11-20 16:27:56.777783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.783 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2068307 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2068307 /var/tmp/bperf.sock 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2068307 ']' 
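The accel_assign_opc, null0 and TCP listener notices above are the whole target-side setup for the error test. An approximate equivalent as explicit rpc.py calls against the target's default /var/tmp/spdk.sock is sketched below; this is hedged, since the log only shows the resulting notices, so the bdev size, serial number and exact arguments are assumptions (SPDK as in the earlier sketch):

    # Route the crc32c opcode to the 'error' accel module so digest corruption can be injected later.
    $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
    $SPDK/scripts/rpc.py framework_start_init
    # Expose a null bdev over NVMe/TCP on 10.0.0.2:4420 (argument values are illustrative).
    $SPDK/scripts/rpc.py bdev_null_create null0 100 4096
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420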
00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:25.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.784 16:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 [2024-11-20 16:27:56.828151] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:26:25.784 [2024-11-20 16:27:56.828193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068307 ] 00:26:25.784 [2024-11-20 16:27:56.901762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.784 [2024-11-20 16:27:56.943559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.042 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:26.042 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:26.042 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:26.042 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:26.042 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:26.042 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.042 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:26.042 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.042 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.042 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.300 nvme0n1 00:26:26.300 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:26.300 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.300 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
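On the bdevperf side the error run then needs only the handful of RPCs visible in the xtrace around this point. A sketch of that flow follows; the unprefixed accel_error_inject_error calls are issued through rpc_cmd, which is assumed to target the nvmf_tgt's default RPC socket rather than bperf.sock (SPDK as in the earlier sketch):

    # Initiator: keep NVMe error statistics and retry indefinitely, so injected digest errors surface
    # as transient transport errors instead of failing the job outright.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Target: leave crc32c uncorrupted while the controller attaches with data digest enabled.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Target: corrupt the next 256 crc32c operations, then drive I/O; each corrupted digest shows up
    # below as a 'data digest error' followed by a COMMAND TRANSIENT TRANSPORT ERROR completion.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests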
00:26:26.300 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.300 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:26.300 16:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:26.559 Running I/O for 2 seconds... 00:26:26.559 [2024-11-20 16:27:57.579634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.559 [2024-11-20 16:27:57.579667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.559 [2024-11-20 16:27:57.579678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.559 [2024-11-20 16:27:57.591387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.559 [2024-11-20 16:27:57.591413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.559 [2024-11-20 16:27:57.591422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.559 [2024-11-20 16:27:57.600123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.559 [2024-11-20 16:27:57.600147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.559 [2024-11-20 16:27:57.600156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.559 [2024-11-20 16:27:57.609669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.559 [2024-11-20 16:27:57.609691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.559 [2024-11-20 16:27:57.609700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.559 [2024-11-20 16:27:57.619948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.559 [2024-11-20 16:27:57.619970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.559 [2024-11-20 16:27:57.619979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.559 [2024-11-20 16:27:57.628637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.559 [2024-11-20 16:27:57.628659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.559 [2024-11-20 16:27:57.628667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.559 [2024-11-20 16:27:57.637995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.559 [2024-11-20 16:27:57.638015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.559 [2024-11-20 16:27:57.638023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.559 [2024-11-20 16:27:57.647612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.559 [2024-11-20 16:27:57.647634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.559 [2024-11-20 16:27:57.647642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.559 [2024-11-20 16:27:57.656482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.559 [2024-11-20 16:27:57.656503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.559 [2024-11-20 16:27:57.656511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.559 [2024-11-20 16:27:57.665929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.559 [2024-11-20 16:27:57.665951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.665959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.675673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.675694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.675702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.685679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.685701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.685709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.693253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.693274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.693282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.703328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.703349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.703357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.713004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.713025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.713033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.720964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.720985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.720997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.732295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.732316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.732324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.744751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.744772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.744780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.753954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.753974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.753982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.762840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.762862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.762871] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.773034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.773055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.773064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.560 [2024-11-20 16:27:57.782347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.560 [2024-11-20 16:27:57.782367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.560 [2024-11-20 16:27:57.782375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.791110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.791133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.791141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.799989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.800012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.800020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.810262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.810289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.810297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.820312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.820333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.820342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.829851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.829873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:26.818 [2024-11-20 16:27:57.829881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.840218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.840240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.840249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.848817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.848839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.848847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.860196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.860223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.860232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.872328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.872349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.872357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.884678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.884699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.884708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.896746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.896768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.896777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.905539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.905561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:17424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.905570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.916657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.916679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.916686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.929141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.929163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.929171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.939519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.939541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.939550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.947272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.818 [2024-11-20 16:27:57.947293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.818 [2024-11-20 16:27:57.947301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.818 [2024-11-20 16:27:57.958197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.819 [2024-11-20 16:27:57.958225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.819 [2024-11-20 16:27:57.958234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.819 [2024-11-20 16:27:57.967005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.819 [2024-11-20 16:27:57.967026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.819 [2024-11-20 16:27:57.967035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.819 [2024-11-20 16:27:57.977565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.819 [2024-11-20 16:27:57.977586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.819 [2024-11-20 16:27:57.977595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.819 [2024-11-20 16:27:57.986816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.819 [2024-11-20 16:27:57.986842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.819 [2024-11-20 16:27:57.986860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.819 [2024-11-20 16:27:57.995096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.819 [2024-11-20 16:27:57.995116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.819 [2024-11-20 16:27:57.995125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.819 [2024-11-20 16:27:58.005809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.819 [2024-11-20 16:27:58.005832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.819 [2024-11-20 16:27:58.005840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.819 [2024-11-20 16:27:58.013966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.819 [2024-11-20 16:27:58.013988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.819 [2024-11-20 16:27:58.013996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.819 [2024-11-20 16:27:58.024984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.819 [2024-11-20 16:27:58.025006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.819 [2024-11-20 16:27:58.025014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.819 [2024-11-20 16:27:58.036582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:26.819 [2024-11-20 16:27:58.036602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.819 [2024-11-20 16:27:58.036611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.819 [2024-11-20 16:27:58.044471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 
00:26:26.819 [2024-11-20 16:27:58.044492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.819 [2024-11-20 16:27:58.044500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-11-20 16:27:58.057275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.086 [2024-11-20 16:27:58.057299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-11-20 16:27:58.057309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-11-20 16:27:58.067212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.086 [2024-11-20 16:27:58.067234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-11-20 16:27:58.067243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-11-20 16:27:58.076288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.086 [2024-11-20 16:27:58.076313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-11-20 16:27:58.076322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-11-20 16:27:58.087326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.086 [2024-11-20 16:27:58.087347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-11-20 16:27:58.087355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-11-20 16:27:58.100148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.086 [2024-11-20 16:27:58.100171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-11-20 16:27:58.100179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-11-20 16:27:58.111715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.086 [2024-11-20 16:27:58.111737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-11-20 16:27:58.111745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-11-20 16:27:58.121099] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.086 [2024-11-20 16:27:58.121121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-11-20 16:27:58.121129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-11-20 16:27:58.132098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.086 [2024-11-20 16:27:58.132120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-11-20 16:27:58.132128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-11-20 16:27:58.144654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.086 [2024-11-20 16:27:58.144675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-11-20 16:27:58.144684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-11-20 16:27:58.154718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.086 [2024-11-20 16:27:58.154740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-11-20 16:27:58.154748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-11-20 16:27:58.165018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.087 [2024-11-20 16:27:58.165039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.087 [2024-11-20 16:27:58.165050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.087 [2024-11-20 16:27:58.173045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.087 [2024-11-20 16:27:58.173066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.087 [2024-11-20 16:27:58.173075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.087 [2024-11-20 16:27:58.182432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.087 [2024-11-20 16:27:58.182453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.087 [2024-11-20 16:27:58.182462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:27.087 [2024-11-20 16:27:58.193186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.087 [2024-11-20 16:27:58.193212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.087 [2024-11-20 16:27:58.193221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.087 [2024-11-20 16:27:58.201228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.087 [2024-11-20 16:27:58.201248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.087 [2024-11-20 16:27:58.201257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.087 [2024-11-20 16:27:58.210485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.087 [2024-11-20 16:27:58.210506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.087 [2024-11-20 16:27:58.210514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.087 [2024-11-20 16:27:58.219339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.087 [2024-11-20 16:27:58.219359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.087 [2024-11-20 16:27:58.219368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.087 [2024-11-20 16:27:58.229297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.087 [2024-11-20 16:27:58.229317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.087 [2024-11-20 16:27:58.229326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.087 [2024-11-20 16:27:58.239100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.087 [2024-11-20 16:27:58.239120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.087 [2024-11-20 16:27:58.239128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.087 [2024-11-20 16:27:58.248289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740) 00:26:27.087 [2024-11-20 16:27:58.248312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.087 [2024-11-20 16:27:58.248320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.087 [2024-11-20 16:27:58.259114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740)
00:26:27.087 [2024-11-20 16:27:58.259135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.087 [2024-11-20 16:27:58.259143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (a data digest error on tqpair=(0x191e740), the offending READ on sqid:1 with varying cid/lba values, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1) repeats for every read completed between 16:27:58.269 and 16:27:59.564 during the digest_error run; an interim rate sample of 25234.00 IOPS, 98.57 MiB/s [2024-11-20T15:27:58.599Z] appears partway through, and 199 transient transport errors are counted below ...]
00:26:28.402 [2024-11-20 16:27:59.564923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x191e740)
00:26:28.402 [2024-11-20 16:27:59.564945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.402 [2024-11-20 16:27:59.564953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.402 [2024-11-20 16:27:59.564953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.402 25359.50 IOPS, 99.06 MiB/s 00:26:28.402 Latency(us) 00:26:28.402 [2024-11-20T15:27:59.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.402 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:28.402 nvme0n1 : 2.00 25383.38 99.15 0.00 0.00 5035.55 2465.40 18225.25 00:26:28.402 [2024-11-20T15:27:59.636Z] =================================================================================================================== 00:26:28.402 [2024-11-20T15:27:59.636Z] Total : 25383.38 99.15 0.00 0.00 5035.55 2465.40 18225.25 00:26:28.402 { 00:26:28.402 "results": [ 00:26:28.402 { 00:26:28.402 "job": "nvme0n1", 00:26:28.402 "core_mask": "0x2", 00:26:28.402 "workload": "randread", 00:26:28.402 "status": "finished", 00:26:28.402 "queue_depth": 128, 00:26:28.402 "io_size": 4096, 00:26:28.402 "runtime": 2.004855, 00:26:28.402 "iops": 25383.381840581987, 00:26:28.403 "mibps": 99.15383531477339, 00:26:28.403 "io_failed": 0, 00:26:28.403 "io_timeout": 0, 00:26:28.403 "avg_latency_us": 5035.553387867389, 00:26:28.403 "min_latency_us": 2465.401904761905, 00:26:28.403 "max_latency_us": 18225.249523809525 00:26:28.403 } 00:26:28.403 ], 00:26:28.403 "core_count": 1 00:26:28.403 } 00:26:28.403 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:28.403 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:28.403 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:28.403 | .driver_specific 00:26:28.403 | .nvme_error 00:26:28.403 | .status_code 00:26:28.403 | .command_transient_transport_error' 00:26:28.403 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 199 > 0 )) 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2068307 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2068307 ']' 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2068307 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2068307 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 2068307' 00:26:28.661 killing process with pid 2068307 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2068307 00:26:28.661 Received shutdown signal, test time was about 2.000000 seconds 00:26:28.661 00:26:28.661 Latency(us) 00:26:28.661 [2024-11-20T15:27:59.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.661 [2024-11-20T15:27:59.895Z] =================================================================================================================== 00:26:28.661 [2024-11-20T15:27:59.895Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:28.661 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2068307 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2068783 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2068783 /var/tmp/bperf.sock 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2068783 ']' 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.919 16:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.919 [2024-11-20 16:28:00.039880] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:26:28.919 [2024-11-20 16:28:00.039933] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068783 ] 00:26:28.919 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:28.919 Zero copy mechanism will not be used. 
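The xtrace lines above show how digest.sh decides that the first error run passed: get_transient_errcount asks bdevperf (over /var/tmp/bperf.sock) for nvme0n1's I/O statistics with bdev_get_iostat and pulls the command_transient_transport_error counter out of the driver_specific NVMe error stats with jq; the count came back as 199, the (( 199 > 0 )) check passed, and the first bdevperf process was killed. A minimal sketch of that check, assuming relative SPDK paths instead of the Jenkins workspace prefix and a bdevperf instance still listening on /var/tmp/bperf.sock with nvme0n1 attached:

# Hypothetical condensed form of the helper traced above; function and variable
# names are modeled on the trace, not copied verbatim from the script.
get_transient_errcount() {
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}
errs=$(get_transient_errcount nvme0n1)
(( errs > 0 )) || echo "expected transient transport errors, got ${errs:-0}"

The counter is only populated because error statistics were enabled on the controller (bdev_nvme_set_options --nvme-error-stat, visible again in the setup of the next run below).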
00:26:28.920 [2024-11-20 16:28:00.116648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.177 [2024-11-20 16:28:00.157340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.177 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.177 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:29.177 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:29.177 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:29.436 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:29.436 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.436 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.436 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.436 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.436 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.693 nvme0n1 00:26:29.693 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:29.693 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.693 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.693 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.693 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:29.693 16:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.953 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:29.953 Zero copy mechanism will not be used. 00:26:29.953 Running I/O for 2 seconds... 
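The second error run (128 KiB random reads at queue depth 16) is set up by the xtrace output above: bdevperf is started with -o 131072 -q 16 -z on its own RPC socket, NVMe error statistics and unlimited bdev-level retries are enabled, crc32c error injection is cleared while the controller is attached with data digest enabled (--ddgst), injection is then switched to corrupt mode, and perform_tests is issued over the bperf socket. A condensed sketch of that sequence, with the workspace prefix shortened to relative paths and the accel_error_inject_error calls assumed to go to the target application's default RPC socket (the trace issues them via rpc_cmd, not via /var/tmp/bperf.sock):

# Sketch only: mirrors the RPC sequence traced above, not verbatim digest.sh code.
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

# Collect NVMe error completions per status code and retry failed I/O indefinitely.
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any crc32c error injection before connecting, then attach with data digest on.
./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-enable crc32c corruption (-t corrupt -i 32, as traced above) and start the workload.
./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With crc32c corruption being injected, the affected READs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22) once the data digest mismatch is detected, which is what the per-command records that follow show and what feeds the command_transient_transport_error counter checked after each run.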
00:26:29.954 [2024-11-20 16:28:00.971740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:00.971775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:00.971786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:00.977525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:00.977552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:00.977561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:00.983566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:00.983590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:00.983599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:00.989105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:00.989129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:00.989138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:00.994318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:00.994341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:00.994349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:00.999555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:00.999578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:00.999586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.004782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.004804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.004812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.010252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.010274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.010282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.015589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.015612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.015620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.020683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.020705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.020713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.023579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.023601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.023609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.028814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.028836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.028844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.034071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.034092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.034101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.039385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.039408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.039419] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.044726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.044748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.044756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.050012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.050035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.050045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.055357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.055381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.055389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.060238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.060260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.060268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.065511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.065535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.065543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.070789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.070812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.070821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.076064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.076087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 
16:28:01.076095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.081494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.954 [2024-11-20 16:28:01.081516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.954 [2024-11-20 16:28:01.081525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.954 [2024-11-20 16:28:01.086808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.086831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.086840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.092137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.092160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.092169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.097521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.097544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.097553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.102847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.102870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.102878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.108157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.108179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.108187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.113410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.113431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.113455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.118687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.118709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.118717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.123932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.123954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.123962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.129221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.129243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.129254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.134523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.134545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.134553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.140028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.140052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.140060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.145264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.145287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.145295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.148432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.148455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.148463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.153957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.153978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.153986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.159304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.159325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.159334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.164498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.164520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.164527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.169663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.169684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.169693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.174865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.174892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.174900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.955 [2024-11-20 16:28:01.180149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:29.955 [2024-11-20 16:28:01.180170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.955 [2024-11-20 16:28:01.180178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.215 [2024-11-20 16:28:01.185575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.215 [2024-11-20 16:28:01.185597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.215 [2024-11-20 16:28:01.185605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.215 [2024-11-20 16:28:01.190875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.215 [2024-11-20 16:28:01.190896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.215 [2024-11-20 16:28:01.190904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.215 [2024-11-20 16:28:01.196083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.215 [2024-11-20 16:28:01.196104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.215 [2024-11-20 16:28:01.196112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.215 [2024-11-20 16:28:01.201297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.201318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.201326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.206539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.206561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.206570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.211744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.211766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.211775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.217062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.217084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.217093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.222391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 
00:26:30.216 [2024-11-20 16:28:01.222413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.222422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.227681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.227702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.227710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.232889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.232911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.232919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.238163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.238185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.238193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.243410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.243432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.243440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.248595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.248617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.248625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.253848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.253870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.253878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.259049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.259071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.259078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.264379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.264400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.264412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.269680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.269702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.269709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.274885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.274907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.274915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.280113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.280134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.280142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.285379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.285399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.285407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.290552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.290574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.290582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.295744] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.295765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.295773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.300958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.300979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.300987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.306121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.306143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.306151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.311368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.311392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.311400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.316532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.316552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.316560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.321670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.216 [2024-11-20 16:28:01.321691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.216 [2024-11-20 16:28:01.321699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.216 [2024-11-20 16:28:01.326820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.326842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.326850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:30.217 [2024-11-20 16:28:01.332007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.332028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.332036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.337247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.337268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.337276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.342502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.342523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.342531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.347670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.347690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.347698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.352864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.352886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.352897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.358073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.358094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.358102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.363372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.363394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.363402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.368657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.368679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.368688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.373903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.373925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.373933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.379178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.379200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.379215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.384374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.384395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.384403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.389600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.389621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.389630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.394807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.394829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.394837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.400034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.400058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.400066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.405081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.405102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.405110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.410294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.410315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.410324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.415455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.415477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.415485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.420685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.420707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.420715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.425868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.425890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.425899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.431094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.431116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.431124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.436180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.436200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 
[2024-11-20 16:28:01.436214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.217 [2024-11-20 16:28:01.441501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.217 [2024-11-20 16:28:01.441523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.217 [2024-11-20 16:28:01.441532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.446853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.446875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.446883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.451973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.451995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.452003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.457254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.457275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.457283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.462437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.462458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.462466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.467605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.467626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.467634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.472833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.472854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.472862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.478033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.478055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.478063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.483328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.483349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.483357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.488580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.488602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.488614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.493793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.493814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.493822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.499023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.499045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.499053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.504195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.504223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.504231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.509387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.509408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.509416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.514621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.514642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.514649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.519867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.519888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.519896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.524953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.524975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.524983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.530174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.530195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.530208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.535321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.535345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.535353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.540478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.540499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.540507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.545708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.545729] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.545737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.550914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.550936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.550944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.556155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.556177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.478 [2024-11-20 16:28:01.556185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.478 [2024-11-20 16:28:01.561380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.478 [2024-11-20 16:28:01.561401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.561410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.566625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.566645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.566653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.571825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.571847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.571854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.577249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.577271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.577280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.582768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.582790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.582798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.588279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.588301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.588309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.593879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.593901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.593909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.599239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.599260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.599268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.604736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.604758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.604766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.610064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.610085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.610093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.615452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.615473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.615481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.620676] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.620698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.620706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.626152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.626174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.626192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.631364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.631386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.631394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.636746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.636767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.636776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.642069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.642090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.642098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.647915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.647937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.647946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.653502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.653523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.653531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:26:30.479 [2024-11-20 16:28:01.658725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.658747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.658755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.664354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.664375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.664383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.669632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.669654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.669662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.675014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.675035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.675044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.479 [2024-11-20 16:28:01.680493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.479 [2024-11-20 16:28:01.680515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.479 [2024-11-20 16:28:01.680524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.480 [2024-11-20 16:28:01.685970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.480 [2024-11-20 16:28:01.685992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-11-20 16:28:01.686000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.480 [2024-11-20 16:28:01.691368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.480 [2024-11-20 16:28:01.691389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-11-20 16:28:01.691397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.480 [2024-11-20 16:28:01.696819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.480 [2024-11-20 16:28:01.696841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-11-20 16:28:01.696849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.480 [2024-11-20 16:28:01.702386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.480 [2024-11-20 16:28:01.702407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-11-20 16:28:01.702415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.708005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.708028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.708036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.713483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.713506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.713514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.718980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.719002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.719014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.724295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.724317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.724325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.729694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.729716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.729724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.735089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.735113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.735124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.740413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.740435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.740443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.746019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.746041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.746050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.751491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.751514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.751522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.756891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.756913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.756921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.762279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.762301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.762309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.767839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.767864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.767872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.773197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.773224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.773233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.778523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.778546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.778554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.783897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.783918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.783926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.789297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.789319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.789327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.794600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.794621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.794629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.740 [2024-11-20 16:28:01.799918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.740 [2024-11-20 16:28:01.799939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.740 [2024-11-20 16:28:01.799947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.805254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.805275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:30.741 [2024-11-20 16:28:01.805283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.810540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.810561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.810569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.816054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.816076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.816084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.821357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.821378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.821386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.826798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.826820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.826829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.832036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.832057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.832065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.837314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.837335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.837344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.842645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.842667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.842675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.847919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.847941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.847949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.853446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.853468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.853475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.858791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.858812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.858826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.863975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.863996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.864004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.869111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.869133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.869141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.874366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.874388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.874396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.879474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.879495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.879504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.884790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.884813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.884821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.890992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.891014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.891022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.898308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.898330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.898339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.905805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.905827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.905836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.912905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.912931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.912939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.741 [2024-11-20 16:28:01.920330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.741 [2024-11-20 16:28:01.920352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.741 [2024-11-20 16:28:01.920360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.742 [2024-11-20 16:28:01.927686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 
00:26:30.742 [2024-11-20 16:28:01.927708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.742 [2024-11-20 16:28:01.927717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.742 [2024-11-20 16:28:01.935528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.742 [2024-11-20 16:28:01.935550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.742 [2024-11-20 16:28:01.935559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.742 [2024-11-20 16:28:01.943267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.742 [2024-11-20 16:28:01.943289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.742 [2024-11-20 16:28:01.943297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.742 [2024-11-20 16:28:01.950590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.742 [2024-11-20 16:28:01.950613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.742 [2024-11-20 16:28:01.950621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.742 [2024-11-20 16:28:01.956625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.742 [2024-11-20 16:28:01.956647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.742 [2024-11-20 16:28:01.956655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.742 [2024-11-20 16:28:01.962920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.742 [2024-11-20 16:28:01.962942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.742 [2024-11-20 16:28:01.962951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.742 5722.00 IOPS, 715.25 MiB/s [2024-11-20T15:28:01.976Z] [2024-11-20 16:28:01.969527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:30.742 [2024-11-20 16:28:01.969550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.742 [2024-11-20 16:28:01.969562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:01.975138] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:01.975160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:01.975168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:01.980401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:01.980423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:01.980431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:01.985891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:01.985914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:01.985922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:01.991355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:01.991376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:01.991384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:01.996840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:01.996861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:01.996869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:02.002148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:02.002170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:02.002178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:02.007530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:02.007553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:02.007560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:31.002 [2024-11-20 16:28:02.012977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:02.012999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:02.013007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:02.018477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:02.018503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:02.018510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:02.023856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:02.023878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:02.023886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:02.029238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:02.029260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:02.029268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:02.034862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:02.034884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:02.034893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:02.040241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:02.040263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:02.040272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:02.045620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:02.045642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:02.045650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:02.051099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:02.051122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:02.051130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:02.056368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.002 [2024-11-20 16:28:02.056391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.002 [2024-11-20 16:28:02.056399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.002 [2024-11-20 16:28:02.061591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.061615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.061623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.066950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.066974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.066982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.071576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.071598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.071607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.076531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.076553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.076561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.081554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.081577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.081585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.086739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.086764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.086773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.091944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.091966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.091975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.097110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.097134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.097142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.102309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.102330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.102339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.107525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.107549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.107561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.113061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.113084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.113093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.118649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.118673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.003 [2024-11-20 16:28:02.118682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.124155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.124177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.124185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.129746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.129769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.129777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.135597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.135621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.135629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.140930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.140953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.140962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.146677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.146701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.146710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.151942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.151966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.151974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.157262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.157288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8480 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.157296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.162692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.162714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.162723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.167965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.167987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.167996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.173363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.173387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.173396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.178362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.178385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.178394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.183623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.183646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.183655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.188684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.188706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.188714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.193859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.193880] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.193888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.003 [2024-11-20 16:28:02.199081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.003 [2024-11-20 16:28:02.199103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.003 [2024-11-20 16:28:02.199111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.004 [2024-11-20 16:28:02.204298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.004 [2024-11-20 16:28:02.204320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.004 [2024-11-20 16:28:02.204328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.004 [2024-11-20 16:28:02.209477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.004 [2024-11-20 16:28:02.209498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.004 [2024-11-20 16:28:02.209506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.004 [2024-11-20 16:28:02.214649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.004 [2024-11-20 16:28:02.214671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.004 [2024-11-20 16:28:02.214679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.004 [2024-11-20 16:28:02.219975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.004 [2024-11-20 16:28:02.219997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.004 [2024-11-20 16:28:02.220004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.004 [2024-11-20 16:28:02.225356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.004 [2024-11-20 16:28:02.225377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.004 [2024-11-20 16:28:02.225385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.004 [2024-11-20 16:28:02.230802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.004 [2024-11-20 16:28:02.230825] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.004 [2024-11-20 16:28:02.230833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.236332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.236355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.236364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.241795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.241817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.241825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.247221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.247247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.247254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.252694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.252715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.252723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.257975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.257998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.258007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.263213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.263235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.263243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.268494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.268516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.268524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.273631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.273653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.273661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.278837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.278859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.278867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.284055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.284078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.284086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.289219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.289240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.289248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.264 [2024-11-20 16:28:02.294525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.264 [2024-11-20 16:28:02.294546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.264 [2024-11-20 16:28:02.294553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.299824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.299845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.299853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.305189] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.305217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.305225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.310467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.310489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.310497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.315835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.315856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.315864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.321128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.321150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.321159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.326567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.326589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.326597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.331920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.331943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.331950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.337751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.337774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.337785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:31.265 [2024-11-20 16:28:02.343072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.343093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.343101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.348580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.348603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.348612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.354169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.354191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.354200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.359569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.359591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.359599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.364972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.364994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.365003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.370262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.370284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.370292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.375623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.375645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.375653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.381000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.381022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.381030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.386386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.386412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.386420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.391747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.391768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.391776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.397282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.397305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.397312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.402602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.402625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.402633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.407999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.408021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.408029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.413354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.413376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.413384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.265 [2024-11-20 16:28:02.418708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.265 [2024-11-20 16:28:02.418730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.265 [2024-11-20 16:28:02.418738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.424177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.424198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.424213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.429686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.429708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.429716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.434868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.434890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.434898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.440055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.440078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.440086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.445326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.445347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.445355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.450454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.450476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.266 [2024-11-20 16:28:02.450484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.455557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.455579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.455586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.460674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.460696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.460704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.465808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.465830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.465838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.469279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.469300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.469309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.473350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.473372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.473383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.478506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.478528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.478536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.483858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.483880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.483888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.266 [2024-11-20 16:28:02.488934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.266 [2024-11-20 16:28:02.488957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.266 [2024-11-20 16:28:02.488965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.494253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.526 [2024-11-20 16:28:02.494275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.526 [2024-11-20 16:28:02.494283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.499468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.526 [2024-11-20 16:28:02.499490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.526 [2024-11-20 16:28:02.499499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.504893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.526 [2024-11-20 16:28:02.504915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.526 [2024-11-20 16:28:02.504923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.510400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.526 [2024-11-20 16:28:02.510423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.526 [2024-11-20 16:28:02.510432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.516214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.526 [2024-11-20 16:28:02.516236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.526 [2024-11-20 16:28:02.516244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.521989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.526 [2024-11-20 16:28:02.522010] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.526 [2024-11-20 16:28:02.522018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.527635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.526 [2024-11-20 16:28:02.527656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.526 [2024-11-20 16:28:02.527664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.532966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.526 [2024-11-20 16:28:02.532987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.526 [2024-11-20 16:28:02.532995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.538335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.526 [2024-11-20 16:28:02.538356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.526 [2024-11-20 16:28:02.538364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.543678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.526 [2024-11-20 16:28:02.543700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.526 [2024-11-20 16:28:02.543707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.549068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.526 [2024-11-20 16:28:02.549090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.526 [2024-11-20 16:28:02.549097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.526 [2024-11-20 16:28:02.554326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.554347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.554355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.559803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.559825] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.559832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.564523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.564544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.564558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.569811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.569833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.569840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.575350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.575372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.575380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.580710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.580732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.580739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.586078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.586100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.586108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.591547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.591569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.591577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.597338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.597362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.597370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.602833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.602855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.602863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.608006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.608028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.608036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.613742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.613768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.613776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.619311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.619333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.619341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.624915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.624937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.624945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.630514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.630536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.630544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.635971] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.635993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.636001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.641285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.641306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.641314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.646448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.646470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.646478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.651539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.651561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.651569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.656715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.656737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.656745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.661908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.661928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.661936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.527 [2024-11-20 16:28:02.667081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.667102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.527 [2024-11-20 16:28:02.667110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:31.527 [2024-11-20 16:28:02.672248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.527 [2024-11-20 16:28:02.672269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.672277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.677425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.677446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.677454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.682594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.682616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.682624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.687785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.687807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.687816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.693037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.693058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.693066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.698359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.698380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.698389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.703523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.703545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.703556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.708749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.708770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.708778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.713990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.714012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.714020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.719181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.719209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.719217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.724342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.724364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.724372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.729537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.729558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.729566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.734784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.734807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.734816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.740077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.740099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.740106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.745335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.745356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.745364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.528 [2024-11-20 16:28:02.750455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.528 [2024-11-20 16:28:02.750479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.528 [2024-11-20 16:28:02.750487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.788 [2024-11-20 16:28:02.755726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.788 [2024-11-20 16:28:02.755748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.788 [2024-11-20 16:28:02.755755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.788 [2024-11-20 16:28:02.761007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.788 [2024-11-20 16:28:02.761029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.788 [2024-11-20 16:28:02.761037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.788 [2024-11-20 16:28:02.766245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.788 [2024-11-20 16:28:02.766266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.788 [2024-11-20 16:28:02.766273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.788 [2024-11-20 16:28:02.771460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.788 [2024-11-20 16:28:02.771482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.788 [2024-11-20 16:28:02.771490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.788 [2024-11-20 16:28:02.776739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.788 [2024-11-20 16:28:02.776760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.788 [2024-11-20 16:28:02.776768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.788 [2024-11-20 16:28:02.781920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.788 [2024-11-20 16:28:02.781941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.788 [2024-11-20 16:28:02.781950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.788 [2024-11-20 16:28:02.787159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.788 [2024-11-20 16:28:02.787181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.788 [2024-11-20 16:28:02.787189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.788 [2024-11-20 16:28:02.792325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.788 [2024-11-20 16:28:02.792346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.788 [2024-11-20 16:28:02.792354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.788 [2024-11-20 16:28:02.797469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.788 [2024-11-20 16:28:02.797490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.788 [2024-11-20 16:28:02.797498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.788 [2024-11-20 16:28:02.802675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.788 [2024-11-20 16:28:02.802696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.802704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.807915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.807936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.807944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.813160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.813182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.813190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.818385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.818406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.818413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.823560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.823581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.823589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.828791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.828812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.828820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.834012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.834033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.834041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.839225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.839246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.839257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.844400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.844422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.844429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.849622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.849643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.849651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.854867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.854889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.854897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.860126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.860147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.860155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.865320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.865341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.865349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.870498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.870519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.870527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.875650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.875672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.875679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.880806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.880827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.880835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.885992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 
00:26:31.789 [2024-11-20 16:28:02.886014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.886021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.891164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.891185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.891193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.896327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.896348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.896355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.901488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.901509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.901517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.906678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.906699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.906707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.911876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.911898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.911906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.917081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.917102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.917110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.922290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.922311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.922318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.927443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.927465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.927476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.932655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.932676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.932685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.937875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.937897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.789 [2024-11-20 16:28:02.937905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.789 [2024-11-20 16:28:02.943035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.789 [2024-11-20 16:28:02.943056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.790 [2024-11-20 16:28:02.943064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.790 [2024-11-20 16:28:02.948198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.790 [2024-11-20 16:28:02.948225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.790 [2024-11-20 16:28:02.948233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.790 [2024-11-20 16:28:02.953396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.790 [2024-11-20 16:28:02.953417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.790 [2024-11-20 16:28:02.953425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.790 [2024-11-20 16:28:02.958943] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.790 [2024-11-20 16:28:02.958965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.790 [2024-11-20 16:28:02.958973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.790 [2024-11-20 16:28:02.966032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b64e0) 00:26:31.790 [2024-11-20 16:28:02.966054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.790 [2024-11-20 16:28:02.966062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.790 5784.00 IOPS, 723.00 MiB/s 00:26:31.790 Latency(us) 00:26:31.790 [2024-11-20T15:28:03.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.790 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:31.790 nvme0n1 : 2.00 5786.36 723.29 0.00 0.00 2762.53 655.36 10860.25 00:26:31.790 [2024-11-20T15:28:03.024Z] =================================================================================================================== 00:26:31.790 [2024-11-20T15:28:03.024Z] Total : 5786.36 723.29 0.00 0.00 2762.53 655.36 10860.25 00:26:31.790 { 00:26:31.790 "results": [ 00:26:31.790 { 00:26:31.790 "job": "nvme0n1", 00:26:31.790 "core_mask": "0x2", 00:26:31.790 "workload": "randread", 00:26:31.790 "status": "finished", 00:26:31.790 "queue_depth": 16, 00:26:31.790 "io_size": 131072, 00:26:31.790 "runtime": 2.001951, 00:26:31.790 "iops": 5786.355410297255, 00:26:31.790 "mibps": 723.2944262871569, 00:26:31.790 "io_failed": 0, 00:26:31.790 "io_timeout": 0, 00:26:31.790 "avg_latency_us": 2762.5348276769273, 00:26:31.790 "min_latency_us": 655.36, 00:26:31.790 "max_latency_us": 10860.251428571428 00:26:31.790 } 00:26:31.790 ], 00:26:31.790 "core_count": 1 00:26:31.790 } 00:26:31.790 16:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:31.790 16:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:31.790 16:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:31.790 16:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:31.790 | .driver_specific 00:26:31.790 | .nvme_error 00:26:31.790 | .status_code 00:26:31.790 | .command_transient_transport_error' 00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 374 > 0 )) 00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2068783 00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2068783 ']' 00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2068783 00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 
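[editor's note] For context on the check traced just above: the digest-error pass decides success by reading the host-side NVMe error counters back from the bdevperf instance and requiring that at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded (the (( 374 > 0 )) test in the trace). A minimal stand-alone sketch of that query, reusing the bperf.sock path, bdev name, and jq filter shown in this log; the variable name and the failure message are illustrative only, not part of the test script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Ask the bdevperf app on /var/tmp/bperf.sock for per-bdev I/O statistics and pull
    # out the count of commands completed with a transient transport error status.
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # With crc32c corruption injected, reads hit data digest errors and complete with a
    # transient transport status, so a passing run must show a non-zero count here.
    (( errcount > 0 )) || echo "expected transient transport errors, got ${errcount}" >&2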
00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2068783 00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2068783' 00:26:32.049 killing process with pid 2068783 00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2068783 00:26:32.049 Received shutdown signal, test time was about 2.000000 seconds 00:26:32.049 00:26:32.049 Latency(us) 00:26:32.049 [2024-11-20T15:28:03.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.049 [2024-11-20T15:28:03.283Z] =================================================================================================================== 00:26:32.049 [2024-11-20T15:28:03.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:32.049 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2068783 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2069264 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2069264 /var/tmp/bperf.sock 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2069264 ']' 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:32.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
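[editor's note] The tail of the trace above starts the bdevperf instance for the randwrite pass: it is launched with -z so it sits idle waiting for RPC commands, bound to its own UNIX socket, and the script then waits for that socket to come up before configuring it. A rough sketch of that launch, with the binary path and parameters copied from the command line in the trace; the polling loop below stands in for the suite's waitforlisten helper, which additionally tracks the PID and applies a timeout:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start bdevperf idle (-z) on a private RPC socket; core mask, queue depth, I/O size,
    # workload, and runtime match the randwrite parameters shown in the trace.
    "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Wait until the application answers on its RPC socket before sending configuration.
    until "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
    done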
00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.307 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.307 [2024-11-20 16:28:03.441798] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:26:32.307 [2024-11-20 16:28:03.441844] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069264 ] 00:26:32.307 [2024-11-20 16:28:03.515705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.565 [2024-11-20 16:28:03.559059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.565 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.565 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:32.565 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.565 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.823 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:32.823 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.823 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.823 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.823 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.823 16:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:33.081 nvme0n1 00:26:33.081 16:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:33.081 16:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.081 16:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.081 16:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.081 16:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:33.081 16:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:33.340 Running I/O for 2 seconds... 
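[editor's note] Right before "Running I/O for 2 seconds..." the trace wires up the fault injection for this pass: NVMe error statistics are enabled with unlimited bdev-layer retries, crc32c error injection is switched off while the controller is attached, the attach itself uses --ddgst so data digests are generated and verified on the connection, injection is then set to corrupt every 256th crc32c operation, and bdevperf.py kicks off the prepared job. A condensed sketch of that sequence using the literal RPC calls from the trace; note that accel_error_inject_error goes through the suite's rpc_cmd wrapper, which is assumed here to reach rpc.py's default application socket rather than bperf.sock:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"
    # Host side: keep per-controller NVMe error counters and retry failed I/O indefinitely,
    # so digest failures surface as transient-error statistics instead of failing the job.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Leave crc32c injection disabled while connecting, then attach with data digest enabled.
    "$rpc" accel_error_inject_error -o crc32c -t disable
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt every 256th crc32c operation, then drive the idle bdevperf job for its 2-second run.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests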
00:26:33.340 [2024-11-20 16:28:04.416427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee1b48 00:26:33.340 [2024-11-20 16:28:04.417344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.417377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.425152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eeb760 00:26:33.340 [2024-11-20 16:28:04.426104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.426125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.434721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef8e88 00:26:33.340 [2024-11-20 16:28:04.435784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.435804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.444292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef46d0 00:26:33.340 [2024-11-20 16:28:04.445491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.445511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.453828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee3498 00:26:33.340 [2024-11-20 16:28:04.455125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.455145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.461768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef4f40 00:26:33.340 [2024-11-20 16:28:04.462643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.462662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.470736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef4f40 00:26:33.340 [2024-11-20 16:28:04.471641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.471660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 
sqhd:004b p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.479942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef3e60 00:26:33.340 [2024-11-20 16:28:04.480684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.480703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.489137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eeaef0 00:26:33.340 [2024-11-20 16:28:04.490089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.490108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.498187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efcdd0 00:26:33.340 [2024-11-20 16:28:04.499142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.499160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.507220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efef90 00:26:33.340 [2024-11-20 16:28:04.508170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.508188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.516215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efda78 00:26:33.340 [2024-11-20 16:28:04.517159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.517178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.525226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee3d08 00:26:33.340 [2024-11-20 16:28:04.526174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.526192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.340 [2024-11-20 16:28:04.534250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee4de8 00:26:33.340 [2024-11-20 16:28:04.535195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.340 [2024-11-20 16:28:04.535218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.341 [2024-11-20 16:28:04.543306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eec408 00:26:33.341 [2024-11-20 16:28:04.544252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.341 [2024-11-20 16:28:04.544271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.341 [2024-11-20 16:28:04.552301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed4e8 00:26:33.341 [2024-11-20 16:28:04.553253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.341 [2024-11-20 16:28:04.553272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.341 [2024-11-20 16:28:04.561324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef4b08 00:26:33.341 [2024-11-20 16:28:04.562284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.341 [2024-11-20 16:28:04.562302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.341 [2024-11-20 16:28:04.570565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef0350 00:26:33.600 [2024-11-20 16:28:04.571637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.571656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.579842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eef270 00:26:33.600 [2024-11-20 16:28:04.580807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.580826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.588857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eee190 00:26:33.600 [2024-11-20 16:28:04.589838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.589857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.597906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee23b8 00:26:33.600 [2024-11-20 16:28:04.598854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.598873] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.606899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee6fa8 00:26:33.600 [2024-11-20 16:28:04.607910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.607929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.615898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee88f8 00:26:33.600 [2024-11-20 16:28:04.616854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.616873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.624887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee99d8 00:26:33.600 [2024-11-20 16:28:04.625845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.625863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.633897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eeaab8 00:26:33.600 [2024-11-20 16:28:04.634851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.634870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.642881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efc998 00:26:33.600 [2024-11-20 16:28:04.643887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.643906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.651283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef3a28 00:26:33.600 [2024-11-20 16:28:04.652236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.652258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.660770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef5378 00:26:33.600 [2024-11-20 16:28:04.661819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.661838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.669092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eebfd0 00:26:33.600 [2024-11-20 16:28:04.669828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.669848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.678188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed0b0 00:26:33.600 [2024-11-20 16:28:04.678938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.678957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.687326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef46d0 00:26:33.600 [2024-11-20 16:28:04.688075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.688094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.696501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef0788 00:26:33.600 [2024-11-20 16:28:04.697235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.697254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.705526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eef6a8 00:26:33.600 [2024-11-20 16:28:04.706248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.706267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.714561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef2d80 00:26:33.600 [2024-11-20 16:28:04.715291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.715310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.723552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef1ca0 00:26:33.600 [2024-11-20 16:28:04.724280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.724299] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.732552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef0bc0 00:26:33.600 [2024-11-20 16:28:04.733288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.733307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.741713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef5be8 00:26:33.600 [2024-11-20 16:28:04.742452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.600 [2024-11-20 16:28:04.742472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.600 [2024-11-20 16:28:04.750705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef6cc8 00:26:33.600 [2024-11-20 16:28:04.751443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.601 [2024-11-20 16:28:04.751462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.601 [2024-11-20 16:28:04.759965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef92c0 00:26:33.601 [2024-11-20 16:28:04.760471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.601 [2024-11-20 16:28:04.760491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:33.601 [2024-11-20 16:28:04.769339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed4e8 00:26:33.601 [2024-11-20 16:28:04.769965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.601 [2024-11-20 16:28:04.769985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:33.601 [2024-11-20 16:28:04.778756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef0ff8 00:26:33.601 [2024-11-20 16:28:04.779496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.601 [2024-11-20 16:28:04.779516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:33.601 [2024-11-20 16:28:04.787222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef4298 00:26:33.601 [2024-11-20 16:28:04.787888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.601 
[2024-11-20 16:28:04.787907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:33.601 [2024-11-20 16:28:04.796369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efcdd0 00:26:33.601 [2024-11-20 16:28:04.797389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.601 [2024-11-20 16:28:04.797407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:33.601 [2024-11-20 16:28:04.806339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efeb58 00:26:33.601 [2024-11-20 16:28:04.807663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.601 [2024-11-20 16:28:04.807682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:33.601 [2024-11-20 16:28:04.815121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee01f8 00:26:33.601 [2024-11-20 16:28:04.816427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.601 [2024-11-20 16:28:04.816445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:33.601 [2024-11-20 16:28:04.824551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efdeb0 00:26:33.601 [2024-11-20 16:28:04.826039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.601 [2024-11-20 16:28:04.826057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:33.887 [2024-11-20 16:28:04.831474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef0ff8 00:26:33.887 [2024-11-20 16:28:04.832233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.887 [2024-11-20 16:28:04.832251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:33.887 [2024-11-20 16:28:04.841462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef0ff8 00:26:33.888 [2024-11-20 16:28:04.842099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.842117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.851087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efc998 00:26:33.888 [2024-11-20 16:28:04.852079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2058 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:33.888 [2024-11-20 16:28:04.852098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.860324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee0a68 00:26:33.888 [2024-11-20 16:28:04.860846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.860865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.869428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efb8b8 00:26:33.888 [2024-11-20 16:28:04.870183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.870207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.879609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efb8b8 00:26:33.888 [2024-11-20 16:28:04.880920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.880939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.888694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eec408 00:26:33.888 [2024-11-20 16:28:04.889995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.890017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.896801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef81e0 00:26:33.888 [2024-11-20 16:28:04.897833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.897852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.905780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efdeb0 00:26:33.888 [2024-11-20 16:28:04.906800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.906819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.915396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee3498 00:26:33.888 [2024-11-20 16:28:04.916599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8890 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.916618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.924826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed0b0 00:26:33.888 [2024-11-20 16:28:04.926209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.926229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.931640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee3498 00:26:33.888 [2024-11-20 16:28:04.932291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.932310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.942808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eec840 00:26:33.888 [2024-11-20 16:28:04.943856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.943875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.951969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016edece0 00:26:33.888 [2024-11-20 16:28:04.952880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.952901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.961554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee1710 00:26:33.888 [2024-11-20 16:28:04.962794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.962814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.969713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef2510 00:26:33.888 [2024-11-20 16:28:04.970915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.970935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.978051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ede470 00:26:33.888 [2024-11-20 16:28:04.978631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 
nsid:1 lba:10409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.978650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.987340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efdeb0 00:26:33.888 [2024-11-20 16:28:04.988155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.988176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:04.996330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eecc78 00:26:33.888 [2024-11-20 16:28:04.997222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:04.997242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:05.007885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef2948 00:26:33.888 [2024-11-20 16:28:05.009283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:05.009302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:05.014512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef81e0 00:26:33.888 [2024-11-20 16:28:05.015152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:05.015171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:05.023912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee3060 00:26:33.888 [2024-11-20 16:28:05.024621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:05.024640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:33.888 [2024-11-20 16:28:05.033275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef6cc8 00:26:33.888 [2024-11-20 16:28:05.034084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.888 [2024-11-20 16:28:05.034103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:33.889 [2024-11-20 16:28:05.043986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef6cc8 00:26:33.889 [2024-11-20 16:28:05.045373] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.889 [2024-11-20 16:28:05.045392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:33.889 [2024-11-20 16:28:05.053222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee5a90 00:26:33.889 [2024-11-20 16:28:05.054499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.889 [2024-11-20 16:28:05.054518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:33.889 [2024-11-20 16:28:05.060711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef7970 00:26:33.889 [2024-11-20 16:28:05.061653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.889 [2024-11-20 16:28:05.061673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.889 [2024-11-20 16:28:05.069853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eedd58 00:26:33.889 [2024-11-20 16:28:05.070558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.889 [2024-11-20 16:28:05.070578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.889 [2024-11-20 16:28:05.078665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee5220 00:26:33.889 [2024-11-20 16:28:05.079357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.889 [2024-11-20 16:28:05.079377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.889 [2024-11-20 16:28:05.086967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee3d08 00:26:33.889 [2024-11-20 16:28:05.087796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.889 [2024-11-20 16:28:05.087815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.096791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efda78 00:26:34.149 [2024-11-20 16:28:05.097657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.097676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.106227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee73e0 00:26:34.149 [2024-11-20 16:28:05.107238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.107258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.115632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eeb328 00:26:34.149 [2024-11-20 16:28:05.116662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.116680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.124703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ede8a8 00:26:34.149 [2024-11-20 16:28:05.125803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.125825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.133876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efcdd0 00:26:34.149 [2024-11-20 16:28:05.134984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.135004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.142659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed4e8 00:26:34.149 [2024-11-20 16:28:05.143329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.143349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.151650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef0350 00:26:34.149 [2024-11-20 16:28:05.152559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.152579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.160751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eedd58 00:26:34.149 [2024-11-20 16:28:05.161655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.161675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.169737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eedd58 00:26:34.149 [2024-11-20 
16:28:05.170640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.170660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.178963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eedd58 00:26:34.149 [2024-11-20 16:28:05.179898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.179918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.187788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed4e8 00:26:34.149 [2024-11-20 16:28:05.188742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.188762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.197068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eec408 00:26:34.149 [2024-11-20 16:28:05.197984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.198003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.206376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eebfd0 00:26:34.149 [2024-11-20 16:28:05.207406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.207424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.216135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef3a28 00:26:34.149 [2024-11-20 16:28:05.217199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.217224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.226334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efb8b8 00:26:34.149 [2024-11-20 16:28:05.227306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.227327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.236080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efa3a0 
00:26:34.149 [2024-11-20 16:28:05.237217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.237238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.246774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee4140 00:26:34.149 [2024-11-20 16:28:05.247862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.247882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.257037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eeaab8 00:26:34.149 [2024-11-20 16:28:05.258186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.258210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.267220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eec408 00:26:34.149 [2024-11-20 16:28:05.267874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.267895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.277630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef0bc0 00:26:34.149 [2024-11-20 16:28:05.278376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.278396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.287029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ede038 00:26:34.149 [2024-11-20 16:28:05.288241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.288260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.296746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eea248 00:26:34.149 [2024-11-20 16:28:05.297797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.149 [2024-11-20 16:28:05.297816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:34.149 [2024-11-20 16:28:05.306748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) 
with pdu=0x200016ee95a0 00:26:34.150 [2024-11-20 16:28:05.308002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.150 [2024-11-20 16:28:05.308021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:34.150 [2024-11-20 16:28:05.316727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efb048 00:26:34.150 [2024-11-20 16:28:05.317491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.150 [2024-11-20 16:28:05.317512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:34.150 [2024-11-20 16:28:05.325991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef2d80 00:26:34.150 [2024-11-20 16:28:05.327376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.150 [2024-11-20 16:28:05.327396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:34.150 [2024-11-20 16:28:05.334418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016edf550 00:26:34.150 [2024-11-20 16:28:05.335157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.150 [2024-11-20 16:28:05.335177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:34.150 [2024-11-20 16:28:05.346229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efc560 00:26:34.150 [2024-11-20 16:28:05.347485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.150 [2024-11-20 16:28:05.347506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:34.150 [2024-11-20 16:28:05.355407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016edfdc0 00:26:34.150 [2024-11-20 16:28:05.356363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.150 [2024-11-20 16:28:05.356383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:34.150 [2024-11-20 16:28:05.366030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef2510 00:26:34.150 [2024-11-20 16:28:05.367126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.150 [2024-11-20 16:28:05.367146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:34.150 [2024-11-20 16:28:05.376259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x198a180) with pdu=0x200016eed4e8 00:26:34.150 [2024-11-20 16:28:05.376819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.150 [2024-11-20 16:28:05.376842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:34.408 [2024-11-20 16:28:05.384790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee73e0 00:26:34.408 [2024-11-20 16:28:05.385549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.408 [2024-11-20 16:28:05.385568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:34.408 [2024-11-20 16:28:05.396160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efe720 00:26:34.408 [2024-11-20 16:28:05.397268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.408 [2024-11-20 16:28:05.397287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:34.408 27390.00 IOPS, 106.99 MiB/s [2024-11-20T15:28:05.642Z] [2024-11-20 16:28:05.406910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef2510 00:26:34.408 [2024-11-20 16:28:05.408134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.408 [2024-11-20 16:28:05.408154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:34.408 [2024-11-20 16:28:05.415037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef6458 00:26:34.408 [2024-11-20 16:28:05.416015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.408 [2024-11-20 16:28:05.416035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:34.408 [2024-11-20 16:28:05.425785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef6458 00:26:34.409 [2024-11-20 16:28:05.427331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.427351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.432133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016edf118 00:26:34.409 [2024-11-20 16:28:05.432883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.432903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 
16:28:05.440948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef6020 00:26:34.409 [2024-11-20 16:28:05.441678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.441697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.450624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee1f80 00:26:34.409 [2024-11-20 16:28:05.451462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.451481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.461799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee5a90 00:26:34.409 [2024-11-20 16:28:05.463111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.463130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.470111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efd208 00:26:34.409 [2024-11-20 16:28:05.471293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.471312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.479130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef1868 00:26:34.409 [2024-11-20 16:28:05.480167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.480187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.489412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed4e8 00:26:34.409 [2024-11-20 16:28:05.490840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.490859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.496567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee6fa8 00:26:34.409 [2024-11-20 16:28:05.497531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.497551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
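(Context for the block above and below: the repeated tcp.c:2233:data_crc32_calc_done "Data digest error" lines are SPDK recomputing the CRC32C data digest over a received NVMe/TCP data PDU and finding that it does not match the DDGST field carried in that PDU; each affected WRITE is then completed with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status printed on the neighbouring nvme_qpair.c lines. As a rough illustrative sketch only, not SPDK's actual table/SSE4.2-accelerated implementation, and with ddgst_ok(), payload and ddgst_recv being hypothetical names, the digest check boils down to:

    /* Minimal CRC32C (Castagnoli) sketch of the NVMe/TCP data digest check.
     * Illustrative only; SPDK uses an optimized implementation. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;                   /* standard seed */
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++)         /* reflected poly 0x82F63B78 */
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;                     /* final inversion */
    }

    /* Compare the recomputed digest with the DDGST received in the PDU;
     * a mismatch is what gets reported as a "Data digest error" above. */
    static bool ddgst_ok(const uint8_t *payload, size_t len, uint32_t ddgst_recv)
    {
        return crc32c(payload, len) == ddgst_recv;
    }

In the log, each digest error pairs with one WRITE command print and one transient-transport-error completion for the same cid, which is how the per-command failures show up below.)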
00:26:34.409 [2024-11-20 16:28:05.505679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee6b70 00:26:34.409 [2024-11-20 16:28:05.506641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.506660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.515019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eebb98 00:26:34.409 [2024-11-20 16:28:05.515961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.515981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.523929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eeb760 00:26:34.409 [2024-11-20 16:28:05.525009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.525028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.534961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee6b70 00:26:34.409 [2024-11-20 16:28:05.536524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.536541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.541278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee95a0 00:26:34.409 [2024-11-20 16:28:05.541928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.541947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.551316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee8d30 00:26:34.409 [2024-11-20 16:28:05.552397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.552416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.560580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee23b8 00:26:34.409 [2024-11-20 16:28:05.561576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.561596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.570084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef3a28 00:26:34.409 [2024-11-20 16:28:05.571403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.571423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.579487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee8d30 00:26:34.409 [2024-11-20 16:28:05.580928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.580946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.588897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed0b0 00:26:34.409 [2024-11-20 16:28:05.590464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.590482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.595232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef9f68 00:26:34.409 [2024-11-20 16:28:05.595978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.409 [2024-11-20 16:28:05.595996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:34.409 [2024-11-20 16:28:05.605171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eea248 00:26:34.410 [2024-11-20 16:28:05.606247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.410 [2024-11-20 16:28:05.606266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:34.410 [2024-11-20 16:28:05.614573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eec408 00:26:34.410 [2024-11-20 16:28:05.615572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.410 [2024-11-20 16:28:05.615594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:34.410 [2024-11-20 16:28:05.623011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef96f8 00:26:34.410 [2024-11-20 16:28:05.624170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.410 [2024-11-20 16:28:05.624190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:34.410 [2024-11-20 16:28:05.632163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee23b8 00:26:34.410 [2024-11-20 16:28:05.633255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.410 [2024-11-20 16:28:05.633275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.641961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee38d0 00:26:34.669 [2024-11-20 16:28:05.643230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.643249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.650282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee49b0 00:26:34.669 [2024-11-20 16:28:05.651501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.651521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.660307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eefae0 00:26:34.669 [2024-11-20 16:28:05.661324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.661343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.668335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eeff18 00:26:34.669 [2024-11-20 16:28:05.669400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.669419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.677463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef6020 00:26:34.669 [2024-11-20 16:28:05.678457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.678476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.686602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eeea00 00:26:34.669 [2024-11-20 16:28:05.687642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.687661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.696085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efd208 00:26:34.669 [2024-11-20 16:28:05.697007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.697026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.705729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed4e8 00:26:34.669 [2024-11-20 16:28:05.706890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.706909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.714226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee8088 00:26:34.669 [2024-11-20 16:28:05.715226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.715245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.722941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eecc78 00:26:34.669 [2024-11-20 16:28:05.723909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.723928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.732330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efeb58 00:26:34.669 [2024-11-20 16:28:05.733438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.733457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.741740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee9e10 00:26:34.669 [2024-11-20 16:28:05.742972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.742991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.750008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eeee38 00:26:34.669 [2024-11-20 16:28:05.751228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.751247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.759131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef6458 00:26:34.669 [2024-11-20 16:28:05.760013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.760032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.768627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eff3c8 00:26:34.669 [2024-11-20 16:28:05.769549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.769568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.777906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eec408 00:26:34.669 [2024-11-20 16:28:05.778935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.778953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.785494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efcdd0 00:26:34.669 [2024-11-20 16:28:05.785922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.785941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.793587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee6b70 00:26:34.669 [2024-11-20 16:28:05.794132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.794150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.804463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efa3a0 00:26:34.669 [2024-11-20 16:28:05.805606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 16:28:05.805625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.813661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef2948 00:26:34.669 [2024-11-20 16:28:05.814351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.669 [2024-11-20 
16:28:05.814371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.669 [2024-11-20 16:28:05.822182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef0788 00:26:34.669 [2024-11-20 16:28:05.823372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.670 [2024-11-20 16:28:05.823390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.670 [2024-11-20 16:28:05.830383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef81e0 00:26:34.670 [2024-11-20 16:28:05.831049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.670 [2024-11-20 16:28:05.831068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.670 [2024-11-20 16:28:05.839818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee4de8 00:26:34.670 [2024-11-20 16:28:05.840609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.670 [2024-11-20 16:28:05.840630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:34.670 [2024-11-20 16:28:05.848055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eddc00 00:26:34.670 [2024-11-20 16:28:05.848708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.670 [2024-11-20 16:28:05.848730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.670 [2024-11-20 16:28:05.857387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee95a0 00:26:34.670 [2024-11-20 16:28:05.857948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.670 [2024-11-20 16:28:05.857967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.670 [2024-11-20 16:28:05.866859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efda78 00:26:34.670 [2024-11-20 16:28:05.867640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.670 [2024-11-20 16:28:05.867658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:34.670 [2024-11-20 16:28:05.875814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eecc78 00:26:34.670 [2024-11-20 16:28:05.876684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:34.670 [2024-11-20 16:28:05.876703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:34.670 [2024-11-20 16:28:05.885226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee49b0 00:26:34.670 [2024-11-20 16:28:05.886205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.670 [2024-11-20 16:28:05.886222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.670 [2024-11-20 16:28:05.894348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef92c0 00:26:34.670 [2024-11-20 16:28:05.895388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.670 [2024-11-20 16:28:05.895407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:34.929 [2024-11-20 16:28:05.903585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efc560 00:26:34.929 [2024-11-20 16:28:05.904596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.929 [2024-11-20 16:28:05.904615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:34.929 [2024-11-20 16:28:05.912804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee84c0 00:26:34.929 [2024-11-20 16:28:05.913775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.929 [2024-11-20 16:28:05.913794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:34.929 [2024-11-20 16:28:05.922233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef8a50 00:26:34.929 [2024-11-20 16:28:05.923207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.929 [2024-11-20 16:28:05.923226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.929 [2024-11-20 16:28:05.931233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee12d8 00:26:34.929 [2024-11-20 16:28:05.932140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.929 [2024-11-20 16:28:05.932162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.929 [2024-11-20 16:28:05.940198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee5220 00:26:34.929 [2024-11-20 16:28:05.941215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8789 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:34.929 [2024-11-20 16:28:05.941234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.929 [2024-11-20 16:28:05.949062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee49b0 00:26:34.929 [2024-11-20 16:28:05.949989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.929 [2024-11-20 16:28:05.950009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:34.929 [2024-11-20 16:28:05.958876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef3a28 00:26:34.929 [2024-11-20 16:28:05.959916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.929 [2024-11-20 16:28:05.959934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:34.929 [2024-11-20 16:28:05.968156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee2c28 00:26:34.929 [2024-11-20 16:28:05.969272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.929 [2024-11-20 16:28:05.969291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.929 [2024-11-20 16:28:05.975513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efe2e8 00:26:34.929 [2024-11-20 16:28:05.976170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.929 [2024-11-20 16:28:05.976189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:34.929 [2024-11-20 16:28:05.984759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef4b08 00:26:34.929 [2024-11-20 16:28:05.985340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.929 [2024-11-20 16:28:05.985359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:05.993916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eee5c8 00:26:34.930 [2024-11-20 16:28:05.994691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:05.994710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.002908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed0b0 00:26:34.930 [2024-11-20 16:28:06.003714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:7348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.003733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.011877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016edf550 00:26:34.930 [2024-11-20 16:28:06.012655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.012673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.020886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eea680 00:26:34.930 [2024-11-20 16:28:06.021667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.021685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.030134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee6738 00:26:34.930 [2024-11-20 16:28:06.030806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.030826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.039287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee5ec8 00:26:34.930 [2024-11-20 16:28:06.040196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.040218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.048514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efac10 00:26:34.930 [2024-11-20 16:28:06.049213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.049233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.057069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee3d08 00:26:34.930 [2024-11-20 16:28:06.058283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.058301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.065566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed4e8 00:26:34.930 [2024-11-20 16:28:06.066261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:17 nsid:1 lba:10010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.066281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.074555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee27f0 00:26:34.930 [2024-11-20 16:28:06.075224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.075244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.083549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efa3a0 00:26:34.930 [2024-11-20 16:28:06.084229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.084248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.092565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef5be8 00:26:34.930 [2024-11-20 16:28:06.093260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.093278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.101622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016edf118 00:26:34.930 [2024-11-20 16:28:06.102291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.102310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.110616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efda78 00:26:34.930 [2024-11-20 16:28:06.111307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.111326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.119596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016edf550 00:26:34.930 [2024-11-20 16:28:06.120281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.120300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.128599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eed0b0 00:26:34.930 [2024-11-20 16:28:06.129205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.129224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.137893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee3060 00:26:34.930 [2024-11-20 16:28:06.138670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.138689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.146410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef6020 00:26:34.930 [2024-11-20 16:28:06.147183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.147206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:34.930 [2024-11-20 16:28:06.156424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee8d30 00:26:34.930 [2024-11-20 16:28:06.157376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.930 [2024-11-20 16:28:06.157397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.189 [2024-11-20 16:28:06.165922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef8a50 00:26:35.189 [2024-11-20 16:28:06.166841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.189 [2024-11-20 16:28:06.166863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:35.189 [2024-11-20 16:28:06.174934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eedd58 00:26:35.189 [2024-11-20 16:28:06.175868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.189 [2024-11-20 16:28:06.175887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:35.189 [2024-11-20 16:28:06.183903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef0350 00:26:35.189 [2024-11-20 16:28:06.184820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.189 [2024-11-20 16:28:06.184839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:35.189 [2024-11-20 16:28:06.193215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efd640 00:26:35.189 [2024-11-20 
16:28:06.194226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.189 [2024-11-20 16:28:06.194247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:35.189 [2024-11-20 16:28:06.202431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eecc78 00:26:35.190 [2024-11-20 16:28:06.203489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.203509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.211862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efac10 00:26:35.190 [2024-11-20 16:28:06.213033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.213052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.219705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee0ea0 00:26:35.190 [2024-11-20 16:28:06.220193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.220217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.229912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efe2e8 00:26:35.190 [2024-11-20 16:28:06.231042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.231060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.237519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efac10 00:26:35.190 [2024-11-20 16:28:06.238197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.238220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.246609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eea680 00:26:35.190 [2024-11-20 16:28:06.247088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.247108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.255647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef6458 
00:26:35.190 [2024-11-20 16:28:06.256411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.256430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.264174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efe720 00:26:35.190 [2024-11-20 16:28:06.264959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.264978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.273604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efe2e8 00:26:35.190 [2024-11-20 16:28:06.274486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.274504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.283033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee12d8 00:26:35.190 [2024-11-20 16:28:06.284032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.284050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.292451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016edece0 00:26:35.190 [2024-11-20 16:28:06.293561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.293580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.300864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016efe720 00:26:35.190 [2024-11-20 16:28:06.301660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.301678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.309716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016edf118 00:26:35.190 [2024-11-20 16:28:06.310520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.310538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.318677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with 
pdu=0x200016eeaab8 00:26:35.190 [2024-11-20 16:28:06.319400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.319420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.327971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef6890 00:26:35.190 [2024-11-20 16:28:06.328554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.328573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.337382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ede038 00:26:35.190 [2024-11-20 16:28:06.338074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.338093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.347743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee7818 00:26:35.190 [2024-11-20 16:28:06.349256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.349274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.354087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef46d0 00:26:35.190 [2024-11-20 16:28:06.354780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.354799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.363494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016eeee38 00:26:35.190 [2024-11-20 16:28:06.364320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.364340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.372727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef2510 00:26:35.190 [2024-11-20 16:28:06.373216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.373236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.382246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x198a180) with pdu=0x200016edfdc0 00:26:35.190 [2024-11-20 16:28:06.383190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.383213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.393705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ee84c0 00:26:35.190 [2024-11-20 16:28:06.395138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.395157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:35.190 [2024-11-20 16:28:06.400980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a180) with pdu=0x200016ef8e88 00:26:35.190 [2024-11-20 16:28:06.401930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.190 [2024-11-20 16:28:06.401954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:35.190 27821.50 IOPS, 108.68 MiB/s 00:26:35.190 Latency(us) 00:26:35.190 [2024-11-20T15:28:06.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.190 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:35.190 nvme0n1 : 2.00 27824.91 108.69 0.00 0.00 4595.09 1841.25 13856.18 00:26:35.190 [2024-11-20T15:28:06.424Z] =================================================================================================================== 00:26:35.190 [2024-11-20T15:28:06.424Z] Total : 27824.91 108.69 0.00 0.00 4595.09 1841.25 13856.18 00:26:35.190 { 00:26:35.190 "results": [ 00:26:35.190 { 00:26:35.190 "job": "nvme0n1", 00:26:35.190 "core_mask": "0x2", 00:26:35.190 "workload": "randwrite", 00:26:35.190 "status": "finished", 00:26:35.190 "queue_depth": 128, 00:26:35.190 "io_size": 4096, 00:26:35.190 "runtime": 2.003744, 00:26:35.190 "iops": 27824.91176517559, 00:26:35.190 "mibps": 108.69106158271715, 00:26:35.190 "io_failed": 0, 00:26:35.190 "io_timeout": 0, 00:26:35.190 "avg_latency_us": 4595.08695483732, 00:26:35.190 "min_latency_us": 1841.249523809524, 00:26:35.190 "max_latency_us": 13856.182857142858 00:26:35.190 } 00:26:35.190 ], 00:26:35.190 "core_count": 1 00:26:35.190 } 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:35.449 | .driver_specific 00:26:35.449 | .nvme_error 00:26:35.449 | .status_code 00:26:35.449 | .command_transient_transport_error' 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 )) 00:26:35.449 16:28:06 
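The trace above is where host/digest.sh turns this run into a pass/fail check: get_transient_errcount queries bdevperf's RPC socket for the bdev's iostat and pulls the transient-transport-error counter out of the returned JSON (218 for this run), then asserts it is non-zero. A minimal standalone sketch of that query, assuming bdevperf is still listening on /var/tmp/bperf.sock and still exposes nvme0n1 (socket, bdev name and jq filter are all taken from the trace itself), could look like:

#!/usr/bin/env bash
# Sketch only: rpc.py path, socket, bdev name and jq filter are copied from the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
# The nvme_error counters are present because the test enables --nvme-error-stat via
# bdev_nvme_set_options (visible later in this log in the next run's setup trace).
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# Digest corruption was injected, so this stage passes only if at least one
# TRANSIENT TRANSPORT ERROR completion was recorded.
(( errcount > 0 )) && echo "transient transport errors: $errcount"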
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2069264 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2069264 ']' 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2069264 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2069264 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2069264' 00:26:35.449 killing process with pid 2069264 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2069264 00:26:35.449 Received shutdown signal, test time was about 2.000000 seconds 00:26:35.449 00:26:35.449 Latency(us) 00:26:35.449 [2024-11-20T15:28:06.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.449 [2024-11-20T15:28:06.683Z] =================================================================================================================== 00:26:35.449 [2024-11-20T15:28:06.683Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.449 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2069264 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2069948 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2069948 /var/tmp/bperf.sock 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2069948 ']' 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:26:35.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.707 16:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.707 [2024-11-20 16:28:06.881185] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:26:35.707 [2024-11-20 16:28:06.881258] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069948 ] 00:26:35.707 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.708 Zero copy mechanism will not be used. 00:26:35.966 [2024-11-20 16:28:06.955917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.966 [2024-11-20 16:28:06.993530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.966 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.966 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:35.966 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:35.966 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:36.225 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:36.225 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.225 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.225 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.225 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.225 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.482 nvme0n1 00:26:36.482 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:36.482 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.482 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.482 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.482 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:36.482 16:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
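The setup traced above arms the data-digest failure path for the 131072-byte, queue-depth-16 run: NVMe error statistics are switched on, crc32c corruption is injected into the accel framework (-t corrupt -i 32, flags as logged), and the controller is attached with data digest enabled (--ddgst), so the corrupted digests later show up as the TRANSIENT TRANSPORT ERROR completions that get counted at the end of the run. A condensed sketch of the same RPC sequence, using only commands and flags that appear in the trace (the socket used by rpc_cmd is hidden behind xtrace_disable, so the injection calls below assume rpc.py's default socket rather than bperf.sock), might be:

#!/usr/bin/env bash
# Condensed from the host/digest.sh trace above; not a general recipe.
bperf_rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
tgt_rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # assumption: default socket; rpc_cmd's target is not shown in the trace

$bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1     # options as traced: keep per-bdev NVMe error counters
$tgt_rpc accel_error_inject_error -o crc32c -t disable                       # start from a clean state before attaching
$bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # attach with data digest enabled
$tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32                 # corrupt crc32c results (flags as traced)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests                                     # drive the queued randwrite I/O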
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:36.482 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:36.482 Zero copy mechanism will not be used. 00:26:36.482 Running I/O for 2 seconds... 00:26:36.482 [2024-11-20 16:28:07.705294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.482 [2024-11-20 16:28:07.705369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.482 [2024-11-20 16:28:07.705402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.482 [2024-11-20 16:28:07.711617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.482 [2024-11-20 16:28:07.711751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.482 [2024-11-20 16:28:07.711777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.741 [2024-11-20 16:28:07.717046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.741 [2024-11-20 16:28:07.717223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.741 [2024-11-20 16:28:07.717245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.723688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.723844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.723865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.730891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.731033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.731053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.737133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.737281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.737301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.742355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.742447] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.742466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.747642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.747732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.747751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.753550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.753604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.753623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.758850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.758923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.758942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.763925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.764017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.764036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.768956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.769047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.769065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.773761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.773857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.773875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.778468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 
[2024-11-20 16:28:07.778551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.778569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.782905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.782971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.782990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.787148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.787220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.787244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.791450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.791508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.791526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.795704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.795768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.795786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.799989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.800048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.800067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.804592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.804753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.804772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.809988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with 
pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.810090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.810108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.814971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.815075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.815095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.820511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.820763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.820783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.742 [2024-11-20 16:28:07.826871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.742 [2024-11-20 16:28:07.827167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.742 [2024-11-20 16:28:07.827187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.832911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.833219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.833240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.839623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.839838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.839858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.846502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.846754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.846774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.852938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.853236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.853257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.858169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.858441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.858462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.863446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.863706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.863726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.868257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.868508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.868527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.872984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.873244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.873265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.877765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.878015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.878035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.882778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.883026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.883047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.887886] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.888140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.888161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.892989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.893257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.893277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.899048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.899297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.899317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.904011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.904267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.904287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.909130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.909386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.909406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.914597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.914853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.914874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.919582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.919711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.919729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.924491] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.924746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.924775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.929461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.929756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.929777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.935255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.935500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.935521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.940747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.940994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.941015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.743 [2024-11-20 16:28:07.946443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.743 [2024-11-20 16:28:07.946694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.743 [2024-11-20 16:28:07.946715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.744 [2024-11-20 16:28:07.951573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.744 [2024-11-20 16:28:07.951822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.744 [2024-11-20 16:28:07.951842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.744 [2024-11-20 16:28:07.955905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.744 [2024-11-20 16:28:07.956170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.744 [2024-11-20 16:28:07.956188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.744 
[2024-11-20 16:28:07.960157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.744 [2024-11-20 16:28:07.960423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.744 [2024-11-20 16:28:07.960445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.744 [2024-11-20 16:28:07.964375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.744 [2024-11-20 16:28:07.964641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.744 [2024-11-20 16:28:07.964662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.744 [2024-11-20 16:28:07.968697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:36.744 [2024-11-20 16:28:07.968970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.744 [2024-11-20 16:28:07.968991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.004 [2024-11-20 16:28:07.973080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.004 [2024-11-20 16:28:07.973352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.004 [2024-11-20 16:28:07.973373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:07.977332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:07.977569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:07.977590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:07.981759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:07.982023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:07.982044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:07.986247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:07.986497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:07.986517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:07.990832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:07.991106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:07.991126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:07.995571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:07.995832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:07.995852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.000540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.000807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.000828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.005421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.005690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.005711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.010866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.011109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.011129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.016047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.016320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.016341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.023112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.023437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.023459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.029679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.029951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.029972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.036113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.036419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.036440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.042058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.042312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.042333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.047957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.048237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.048257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.054330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.054615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.054635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.059492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.059750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.059775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.064483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.064746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.064767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.069324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.069588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.069608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.074855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.075103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.075125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.079414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.079657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.079677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.084563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.084801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.084826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.089724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.089954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.089973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.094743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.094996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.095015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.005 [2024-11-20 16:28:08.099949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.005 [2024-11-20 16:28:08.100182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.005 [2024-11-20 16:28:08.100211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.104999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.105265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.105286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.109941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.110181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.110209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.114420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.114663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.114684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.118885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.119118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.119139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.123436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.123675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.123695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.127489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.127733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.127753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.131788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.132026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 
16:28:08.132047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.136114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.136355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.136375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.140461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.140714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.140735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.144697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.144948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.144969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.148905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.149139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.149160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.153181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.153444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.153464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.157865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.158110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.158130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.162649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.162901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:37.006 [2024-11-20 16:28:08.162921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.167062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.167319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.167340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.171488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.171741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.171761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.175832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.176081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.176102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.180147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.180398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.180423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.184545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.184803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.184824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.189004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.189256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.189277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.193438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.193689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.193710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.197636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.197868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.197887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.201567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.201778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.006 [2024-11-20 16:28:08.201808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.006 [2024-11-20 16:28:08.205419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.006 [2024-11-20 16:28:08.205608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.007 [2024-11-20 16:28:08.205627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.007 [2024-11-20 16:28:08.209121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.007 [2024-11-20 16:28:08.209334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.007 [2024-11-20 16:28:08.209354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.007 [2024-11-20 16:28:08.213150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.007 [2024-11-20 16:28:08.213357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.007 [2024-11-20 16:28:08.213376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.007 [2024-11-20 16:28:08.217923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.007 [2024-11-20 16:28:08.218152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.007 [2024-11-20 16:28:08.218173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.007 [2024-11-20 16:28:08.222693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.007 [2024-11-20 16:28:08.222882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.007 [2024-11-20 16:28:08.222901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.007 [2024-11-20 16:28:08.226909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.007 [2024-11-20 16:28:08.227094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.007 [2024-11-20 16:28:08.227112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.007 [2024-11-20 16:28:08.231340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.007 [2024-11-20 16:28:08.231531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.007 [2024-11-20 16:28:08.231550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.236255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.236425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.236444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.240940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.241027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.241048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.245452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.245641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.245659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.249681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.249890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.249910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.253550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.253757] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.253776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.257218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.257422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.257440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.260832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.261029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.261047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.264500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.264705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.264724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.268120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.268319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.268337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.271703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.271906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.271924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.275315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.275507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.275527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.279243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.279459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.279477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.283347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.283523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.283543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.287680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.268 [2024-11-20 16:28:08.287862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.268 [2024-11-20 16:28:08.287885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.268 [2024-11-20 16:28:08.292211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.292397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.292415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.296268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.296459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.296479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.300200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.300406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.300425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.304132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.304330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.304349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.307847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 
16:28:08.308053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.308071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.311705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.311914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.311933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.315839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.316041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.316061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.319513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.319713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.319733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.323137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.323374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.323394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.326772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.326971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.326989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.330388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.330593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.330614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.333979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with 
pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.334183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.334209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.337604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.337800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.337818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.341227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.341424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.341443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.344816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.345017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.345036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.348378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.348576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.348593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.351961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.352167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.352186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.355540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.355741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.355761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.359134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.359340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.359359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.362730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.362934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.362952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.366309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.366516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.366536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.369855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.370056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.370075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.373382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.269 [2024-11-20 16:28:08.373588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.269 [2024-11-20 16:28:08.373607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.269 [2024-11-20 16:28:08.376968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.377168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.377187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.380569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.380772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.380792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.384104] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.384321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.384343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.387691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.387904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.387924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.391461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.391664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.391684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.395878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.396067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.396086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.400194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.400402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.400430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.404238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.404438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.404458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.408062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.408287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.408305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.412463] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.412669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.412689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.416482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.416677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.416697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.420414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.420602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.420622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.424259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.424464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.424484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.428216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.428427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.428447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.432301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.432493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.432511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.436730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.436845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.436864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.270 
[2024-11-20 16:28:08.441412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.441574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.441593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.446251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.446413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.446431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.450902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.451046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.451065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.455156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.455332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.455351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.270 [2024-11-20 16:28:08.459154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.270 [2024-11-20 16:28:08.459295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.270 [2024-11-20 16:28:08.459314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.271 [2024-11-20 16:28:08.463116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.271 [2024-11-20 16:28:08.463268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.271 [2024-11-20 16:28:08.463287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.271 [2024-11-20 16:28:08.468084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.271 [2024-11-20 16:28:08.468259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.271 [2024-11-20 16:28:08.468279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:37.271 [2024-11-20 16:28:08.472707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.271 [2024-11-20 16:28:08.472873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.271 [2024-11-20 16:28:08.472893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.271 [2024-11-20 16:28:08.476929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.271 [2024-11-20 16:28:08.477100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.271 [2024-11-20 16:28:08.477119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.271 [2024-11-20 16:28:08.480906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.271 [2024-11-20 16:28:08.481099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.271 [2024-11-20 16:28:08.481117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.271 [2024-11-20 16:28:08.484671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.271 [2024-11-20 16:28:08.484851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.271 [2024-11-20 16:28:08.484869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.271 [2024-11-20 16:28:08.488409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.271 [2024-11-20 16:28:08.488593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.271 [2024-11-20 16:28:08.488612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.271 [2024-11-20 16:28:08.492173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.271 [2024-11-20 16:28:08.492357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.271 [2024-11-20 16:28:08.492379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.271 [2024-11-20 16:28:08.496120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.271 [2024-11-20 16:28:08.496321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.271 [2024-11-20 16:28:08.496340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.532 [2024-11-20 16:28:08.500053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.532 [2024-11-20 16:28:08.500238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.532 [2024-11-20 16:28:08.500257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.532 [2024-11-20 16:28:08.503774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.532 [2024-11-20 16:28:08.503960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.532 [2024-11-20 16:28:08.503978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.532 [2024-11-20 16:28:08.507772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.532 [2024-11-20 16:28:08.507930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.532 [2024-11-20 16:28:08.507948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.532 [2024-11-20 16:28:08.511507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.532 [2024-11-20 16:28:08.511685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.532 [2024-11-20 16:28:08.511705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.515270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.515446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.515466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.519051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.519238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.519256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.522793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.522981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.522999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.526574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.526757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.526777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.530253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.530421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.530449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.533922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.534101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.534119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.537791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.537964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.537983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.542199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.542373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.542391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.546490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.546640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.546659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.550657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.550810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.550829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.554961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.555112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.555130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.559486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.559642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.559661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.564079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.564250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.564269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.568131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.568307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.568326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.571864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.572007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.572026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.575753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.575892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.575910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.579779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.579917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 
16:28:08.579935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.583709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.583846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.583864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.587648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.587773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.587791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.591634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.591777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.591795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.595572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.595688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.595710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.599471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.599575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.599593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.603469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.603603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.603622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.607420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.607531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:37.533 [2024-11-20 16:28:08.607549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.611183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.611336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.611355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.533 [2024-11-20 16:28:08.614946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.533 [2024-11-20 16:28:08.615056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.533 [2024-11-20 16:28:08.615075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.618948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.619065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.619084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.623822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.623924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.623943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.627858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.627969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.627987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.631691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.631792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.631810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.635800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.635889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.635907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.639649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.639775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.639793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.643511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.643648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.643667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.647437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.647549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.647567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.651369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.651520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.651538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.655303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.655423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.655441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.659036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.659178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.659197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.663184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.663344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.663363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.668498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.668707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.668727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.674321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.674451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.674469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.680392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.680513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.680532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.687178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.687323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.687342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.693976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.694188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.694216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.701106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 6816.00 IOPS, 852.00 MiB/s [2024-11-20T15:28:08.768Z] [2024-11-20 16:28:08.701271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.701290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.708031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 
16:28:08.708276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.708295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.714510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.714791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.714812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.721114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.721387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.721414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.728359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.728681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.728702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.734796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.734994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.735013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.741897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.742131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.742152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.748121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.748467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.748488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.534 [2024-11-20 16:28:08.755078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with 
pdu=0x200016eff3c8 00:26:37.534 [2024-11-20 16:28:08.755405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.534 [2024-11-20 16:28:08.755426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.762088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.762369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.795 [2024-11-20 16:28:08.762391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.768786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.769054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.795 [2024-11-20 16:28:08.769075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.774604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.774841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.795 [2024-11-20 16:28:08.774861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.778898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.779112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.795 [2024-11-20 16:28:08.779133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.782915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.783143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.795 [2024-11-20 16:28:08.783163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.786852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.787077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.795 [2024-11-20 16:28:08.787097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.790884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.791113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.795 [2024-11-20 16:28:08.791134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.794839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.795066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.795 [2024-11-20 16:28:08.795087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.798667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.798853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.795 [2024-11-20 16:28:08.798870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.802335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.802504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.795 [2024-11-20 16:28:08.802522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.805969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.806147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.795 [2024-11-20 16:28:08.806165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.795 [2024-11-20 16:28:08.809602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.795 [2024-11-20 16:28:08.809780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.809799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.813254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.813425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.813443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.816840] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.817008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.817028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.820429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.820598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.820616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.824040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.824213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.824232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.827684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.827852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.827870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.831293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.831449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.831468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.834911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.835081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.835099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.838504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.838673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.838691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.796 
[2024-11-20 16:28:08.842113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.842286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.842309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.845711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.845881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.845899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.849277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.849448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.849469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.852904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.853073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.853091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.856509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.856676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.856696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.860039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.860223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.860241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.863900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.864050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.864068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.867760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.867929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.867947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.871459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.871619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.871640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.875295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.875460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.875480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.879297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.879456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.879477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.883077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.883254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.883273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.886928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.887110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.887129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.890737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.890897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.890918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.894599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.894769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.894790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.898557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.898712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.898731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.902364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.902517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.902536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.906406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.906576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.796 [2024-11-20 16:28:08.906596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.796 [2024-11-20 16:28:08.910391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.796 [2024-11-20 16:28:08.910568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.910588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.914281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.914455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.914475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.918256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.918417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.918435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.922210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.922385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.922404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.925929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.926111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.926129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.929783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.929953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.929971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.934707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.935094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.935114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.939193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.939339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.939357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.943266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.943437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.943462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.947324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.947490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.947512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.951367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.951527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.951548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.955337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.955507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.955528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.959277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.959445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.959466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.963978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.964119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.964139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.968285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.968444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.968463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.973835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.974099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.974121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.979427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.979628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 
16:28:08.979646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.985593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.985737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.985757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.992027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.992235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.992254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:08.998736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:08.999000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:08.999020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:09.005694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:09.005848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:09.005867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:09.012270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:09.012513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:09.012533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.797 [2024-11-20 16:28:09.018722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:37.797 [2024-11-20 16:28:09.018926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.797 [2024-11-20 16:28:09.018945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.057 [2024-11-20 16:28:09.025642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.057 [2024-11-20 16:28:09.025781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:38.057 [2024-11-20 16:28:09.025801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.032453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.032684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.032705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.039759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.039939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.039958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.046424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.046615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.046635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.053329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.053501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.053519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.060188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.060338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.060357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.067351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.067570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.067592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.073229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.073627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.073646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.078165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.078340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.078359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.082258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.082436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.082456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.086639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.086793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.086813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.090850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.091011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.091033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.094975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.095168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.095187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.099188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.099391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.099410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.103160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.103348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.103368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.107667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.107847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.107867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.112790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.112989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.113008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.118290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.118490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.118519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.123685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.123888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.123907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.128052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.128252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.128271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.132245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.132411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.132431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.136557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.136728] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.136749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.140842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.141040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.141058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.145312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.145506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.145525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.149349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.149573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.149594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.153699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.153900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.153920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.157519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.157762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.058 [2024-11-20 16:28:09.157782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.058 [2024-11-20 16:28:09.162016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.058 [2024-11-20 16:28:09.162333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.162354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.167721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.167935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.167956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.172639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.172822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.172841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.177983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.178152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.178171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.182655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.182828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.182847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.187287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.187449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.187467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.191738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.191920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.191939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.196270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.196430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.196448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.200740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 
16:28:09.200900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.200919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.205010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.205195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.205220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.209043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.209216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.209254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.212817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.212997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.213015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.216604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.216798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.216816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.220387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.220576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.220594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.224170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.224367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.224385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.227962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with 
pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.228151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.228170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.231720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.231907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.231926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.235525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.235710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.235730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.239336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.239530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.239551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.243305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.243502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.243523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.247377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.247563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.247581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.252138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.252318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.252338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.256670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.256844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.256862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.260675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.260815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.260834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.264625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.264805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.264826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.268678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.268848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.268869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.272728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.272913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.272935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.059 [2024-11-20 16:28:09.276667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.059 [2024-11-20 16:28:09.276831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.059 [2024-11-20 16:28:09.276853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.060 [2024-11-20 16:28:09.280620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.060 [2024-11-20 16:28:09.280798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.060 [2024-11-20 16:28:09.280818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.060 [2024-11-20 16:28:09.284748] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.060 [2024-11-20 16:28:09.284940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.060 [2024-11-20 16:28:09.284958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.289504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.289724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.289744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.295074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.295376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.295397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.300555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.300715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.300733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.307617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.307846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.307866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.313683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.313937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.313959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.320271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.320501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.320522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.320 
[2024-11-20 16:28:09.326925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.327155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.327179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.333367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.333622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.333644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.339777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.340020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.340041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.347041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.347359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.347380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.353545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.353670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.353690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.360554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.360734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.360754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.368119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.368365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.368386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.374726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.374916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.374934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.381355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.381561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.381580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.387675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.387857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.387876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.394490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.394728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.394749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.401331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.401603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.320 [2024-11-20 16:28:09.401624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.320 [2024-11-20 16:28:09.407988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.320 [2024-11-20 16:28:09.408284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.408305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.414761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.415034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.415054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.421399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.421662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.421684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.427644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.428025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.428045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.433623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.433819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.433837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.439887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.440059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.440078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.445899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.446039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.446058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.451913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.452074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.452093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.458570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.458798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.458819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.464991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.465150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.465169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.470669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.470817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.470835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.475225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.475402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.475421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.479435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.479598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.479617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.484475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.484655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.484673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.489442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.489610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.489633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.494429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.494582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.494600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.499155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.499347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.499368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.503522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.503676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.503694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.507686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.507842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.507861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.511961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.512125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.512143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.516786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.516985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.517004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.521257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.521432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.521451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.525271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.525460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 
16:28:09.525480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.529167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.529356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.529375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.533174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.533364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.533383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.537156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.537341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.537360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.541102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.541304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.541323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.321 [2024-11-20 16:28:09.545093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.321 [2024-11-20 16:28:09.545284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.321 [2024-11-20 16:28:09.545303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.322 [2024-11-20 16:28:09.549164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.322 [2024-11-20 16:28:09.549360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.322 [2024-11-20 16:28:09.549379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.553418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.553607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:38.581 [2024-11-20 16:28:09.553625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.558318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.558464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.558483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.562557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.562728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.562747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.567068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.567181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.567200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.572185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.572283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.572302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.576801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.576978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.576996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.581790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.581971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.581990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.585907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.586074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.586092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.589896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.590079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.590099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.593887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.594070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.594088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.597884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.598065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.598083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.601992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.602176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.602197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.606013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.606199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.606224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.609999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.610181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.610200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.613950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.614117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.614135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.618013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.618182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.618200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.622865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.623032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.623052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.627297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.627477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.627498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.631310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.631511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.631530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.635350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.635539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.635559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.639187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.639392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.639411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.643044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.643231] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.643250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.646870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.647050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.581 [2024-11-20 16:28:09.647068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.581 [2024-11-20 16:28:09.650695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.581 [2024-11-20 16:28:09.650878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.650896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.654517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.654697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.654718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.658392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.658571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.658591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.662160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.662350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.662369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.665974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.666156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.666175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.669810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.670000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.670021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.673714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.673902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.673922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.677886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.678053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.678071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.682535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.682706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.682727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.686591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.686770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.686790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.690544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.690727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.690748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.694543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.694732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.694753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.698564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 
16:28:09.698748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.698769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.582 [2024-11-20 16:28:09.702533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x198a4c0) with pdu=0x200016eff3c8 00:26:38.582 [2024-11-20 16:28:09.703553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.582 [2024-11-20 16:28:09.703574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.582 6663.50 IOPS, 832.94 MiB/s 00:26:38.582 Latency(us) 00:26:38.582 [2024-11-20T15:28:09.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.582 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:38.582 nvme0n1 : 2.00 6661.72 832.72 0.00 0.00 2397.90 1630.60 11359.57 00:26:38.582 [2024-11-20T15:28:09.816Z] =================================================================================================================== 00:26:38.582 [2024-11-20T15:28:09.816Z] Total : 6661.72 832.72 0.00 0.00 2397.90 1630.60 11359.57 00:26:38.582 { 00:26:38.582 "results": [ 00:26:38.582 { 00:26:38.582 "job": "nvme0n1", 00:26:38.582 "core_mask": "0x2", 00:26:38.582 "workload": "randwrite", 00:26:38.582 "status": "finished", 00:26:38.582 "queue_depth": 16, 00:26:38.582 "io_size": 131072, 00:26:38.582 "runtime": 2.002935, 00:26:38.582 "iops": 6661.723920147184, 00:26:38.582 "mibps": 832.715490018398, 00:26:38.582 "io_failed": 0, 00:26:38.582 "io_timeout": 0, 00:26:38.582 "avg_latency_us": 2397.903997316231, 00:26:38.582 "min_latency_us": 1630.5980952380953, 00:26:38.582 "max_latency_us": 11359.573333333334 00:26:38.582 } 00:26:38.582 ], 00:26:38.582 "core_count": 1 00:26:38.582 } 00:26:38.582 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:38.582 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:38.582 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:38.582 | .driver_specific 00:26:38.582 | .nvme_error 00:26:38.582 | .status_code 00:26:38.582 | .command_transient_transport_error' 00:26:38.582 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 431 > 0 )) 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2069948 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2069948 ']' 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2069948 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2069948 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2069948' 00:26:38.841 killing process with pid 2069948 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2069948 00:26:38.841 Received shutdown signal, test time was about 2.000000 seconds 00:26:38.841 00:26:38.841 Latency(us) 00:26:38.841 [2024-11-20T15:28:10.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.841 [2024-11-20T15:28:10.075Z] =================================================================================================================== 00:26:38.841 [2024-11-20T15:28:10.075Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.841 16:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2069948 00:26:39.099 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2068098 00:26:39.099 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2068098 ']' 00:26:39.099 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2068098 00:26:39.099 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:39.099 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.099 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2068098 00:26:39.099 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:39.099 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:39.099 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2068098' 00:26:39.099 killing process with pid 2068098 00:26:39.099 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2068098 00:26:39.099 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2068098 00:26:39.359 00:26:39.359 real 0m13.967s 00:26:39.359 user 0m26.462s 00:26:39.359 sys 0m4.761s 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.359 ************************************ 00:26:39.359 END TEST nvmf_digest_error 00:26:39.359 ************************************ 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 
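The pass/fail decision for this error-injection test comes down to the (( 431 > 0 )) check above: get_transient_errcount asks the bperf application for I/O statistics and pulls out the counter of commands that completed with a transient transport error, which the injected data-digest failures are expected to have incremented. A minimal way to reproduce that query by hand, assuming the bperf RPC socket /var/tmp/bperf.sock is still listening and the attached bdev is named nvme0n1 as in this run:

# Fetch per-bdev I/O statistics over the bdevperf RPC socket, then extract the
# transient transport error counter that host/digest.sh compares against zero.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

A non-zero result means the target-side digest errors were surfaced to the host as the COMMAND TRANSIENT TRANSPORT ERROR completions logged above.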
00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.359 rmmod nvme_tcp 00:26:39.359 rmmod nvme_fabrics 00:26:39.359 rmmod nvme_keyring 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2068098 ']' 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2068098 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2068098 ']' 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2068098 00:26:39.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2068098) - No such process 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2068098 is not found' 00:26:39.359 Process with pid 2068098 is not found 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.359 16:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:41.897 00:26:41.897 real 0m36.248s 00:26:41.897 user 0m54.753s 00:26:41.897 sys 0m13.952s 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:41.897 ************************************ 00:26:41.897 END TEST nvmf_digest 00:26:41.897 ************************************ 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 
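The nvmftestfini teardown traced here follows the usual TCP cleanup path in nvmf/common.sh: unload the host-side NVMe modules, then restore the firewall by keeping every rule except the ones the suite tagged. A rough sketch of those two steps, with the commands taken directly from the trace (the remove_spdk_ns call that follows, which tears down the cvl_0_0_ns_spdk namespace, is left out here):

# Unload the NVMe/TCP initiator stack; the rmmod lines above are the verbose output of this step.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Drop only the rules the test suite added, which carry an SPDK_NVMF comment tag.
iptables-save | grep -v SPDK_NVMF | iptables-restore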
00:26:41.897 16:28:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.897 ************************************ 00:26:41.897 START TEST nvmf_bdevperf 00:26:41.897 ************************************ 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:41.897 * Looking for test storage... 00:26:41.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:41.897 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:41.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.898 --rc genhtml_branch_coverage=1 00:26:41.898 --rc genhtml_function_coverage=1 00:26:41.898 --rc genhtml_legend=1 00:26:41.898 --rc geninfo_all_blocks=1 00:26:41.898 --rc geninfo_unexecuted_blocks=1 00:26:41.898 00:26:41.898 ' 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:41.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.898 --rc genhtml_branch_coverage=1 00:26:41.898 --rc genhtml_function_coverage=1 00:26:41.898 --rc genhtml_legend=1 00:26:41.898 --rc geninfo_all_blocks=1 00:26:41.898 --rc geninfo_unexecuted_blocks=1 00:26:41.898 00:26:41.898 ' 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:41.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.898 --rc genhtml_branch_coverage=1 00:26:41.898 --rc genhtml_function_coverage=1 00:26:41.898 --rc genhtml_legend=1 00:26:41.898 --rc geninfo_all_blocks=1 00:26:41.898 --rc geninfo_unexecuted_blocks=1 00:26:41.898 00:26:41.898 ' 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:41.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.898 --rc genhtml_branch_coverage=1 00:26:41.898 --rc genhtml_function_coverage=1 00:26:41.898 --rc genhtml_legend=1 00:26:41.898 --rc geninfo_all_blocks=1 00:26:41.898 --rc geninfo_unexecuted_blocks=1 00:26:41.898 00:26:41.898 ' 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:41.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.898 16:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:48.470 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.470 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:48.471 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:48.471 Found net devices under 0000:86:00.0: cvl_0_0 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:48.471 Found net devices under 0000:86:00.1: cvl_0_1 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:26:48.471 00:26:48.471 --- 10.0.0.2 ping statistics --- 00:26:48.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.471 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:26:48.471 00:26:48.471 --- 10.0.0.1 ping statistics --- 00:26:48.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.471 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2073961 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2073961 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2073961 ']' 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.471 16:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.471 [2024-11-20 16:28:18.804830] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:26:48.471 [2024-11-20 16:28:18.804872] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.471 [2024-11-20 16:28:18.881670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:48.471 [2024-11-20 16:28:18.923332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.471 [2024-11-20 16:28:18.923366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.471 [2024-11-20 16:28:18.923373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.471 [2024-11-20 16:28:18.923379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.471 [2024-11-20 16:28:18.923384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.471 [2024-11-20 16:28:18.924822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.471 [2024-11-20 16:28:18.924930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.471 [2024-11-20 16:28:18.924930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.471 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.471 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:48.471 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.471 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.471 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.471 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.471 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:48.471 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.471 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.471 [2024-11-20 16:28:19.062003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.472 Malloc0 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
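With nvmf_tgt up (started with -m 0xE, hence the three reactors on cores 1, 2 and 3 reported above), tgt_init configures the target entirely through rpc.py. A condensed sketch of the calls traced so far, with the flags copied verbatim from the trace; the default /var/tmp/spdk.sock RPC socket is an assumption, since rpc_cmd does not pass -s explicitly here:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Create the TCP transport with the options the script passes ($NVMF_TRANSPORT_OPTS -u 8192).
$rpc nvmf_create_transport -t tcp -o -u 8192
# Create a 64 MiB RAM-backed bdev with 512-byte blocks to serve as the namespace.
$rpc bdev_malloc_create 64 512 -b Malloc0
# Create the subsystem; the namespace and the TCP listener are attached to it next.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001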
00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.472 [2024-11-20 16:28:19.121683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:48.472 { 00:26:48.472 "params": { 00:26:48.472 "name": "Nvme$subsystem", 00:26:48.472 "trtype": "$TEST_TRANSPORT", 00:26:48.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:48.472 "adrfam": "ipv4", 00:26:48.472 "trsvcid": "$NVMF_PORT", 00:26:48.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:48.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:48.472 "hdgst": ${hdgst:-false}, 00:26:48.472 "ddgst": ${ddgst:-false} 00:26:48.472 }, 00:26:48.472 "method": "bdev_nvme_attach_controller" 00:26:48.472 } 00:26:48.472 EOF 00:26:48.472 )") 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:48.472 16:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:48.472 "params": { 00:26:48.472 "name": "Nvme1", 00:26:48.472 "trtype": "tcp", 00:26:48.472 "traddr": "10.0.0.2", 00:26:48.472 "adrfam": "ipv4", 00:26:48.472 "trsvcid": "4420", 00:26:48.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:48.472 "hdgst": false, 00:26:48.472 "ddgst": false 00:26:48.472 }, 00:26:48.472 "method": "bdev_nvme_attach_controller" 00:26:48.472 }' 00:26:48.472 [2024-11-20 16:28:19.173668] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
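On the host side, bdevperf never reads a config file from disk: --json /dev/fd/62 is process substitution, so it consumes the bdev_nvme_attach_controller JSON that gen_nvmf_target_json prints (the block ending in "method": "bdev_nvme_attach_controller" above) from a pipe. An equivalent run by hand, assuming nvmf/common.sh is sourced so the generator and its defaults from this run (Nvme1 at 10.0.0.2:4420, cnode1, header and data digests off) are available:

# Render the same generated config to a regular file, then run the short verify pass:
# -q 128 (queue depth), -o 4096 (I/O size in bytes), -w verify (workload), -t 1 (seconds).
gen_nvmf_target_json > /tmp/bperf_nvme1.json
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bperf_nvme1.json -q 128 -o 4096 -w verify -t 1

The 15-second run traced next (host/bdevperf.sh@29) feeds the same generated JSON through /dev/fd/63, just with -t 15 and the extra -f flag shown in the trace.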
00:26:48.472 [2024-11-20 16:28:19.173711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073984 ] 00:26:48.472 [2024-11-20 16:28:19.246711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.472 [2024-11-20 16:28:19.287563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.472 Running I/O for 1 seconds... 00:26:49.406 11274.00 IOPS, 44.04 MiB/s 00:26:49.406 Latency(us) 00:26:49.406 [2024-11-20T15:28:20.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.406 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:49.406 Verification LBA range: start 0x0 length 0x4000 00:26:49.406 Nvme1n1 : 1.01 11319.41 44.22 0.00 0.00 11264.88 2356.18 12233.39 00:26:49.406 [2024-11-20T15:28:20.640Z] =================================================================================================================== 00:26:49.406 [2024-11-20T15:28:20.640Z] Total : 11319.41 44.22 0.00 0.00 11264.88 2356.18 12233.39 00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2074225 00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:49.664 { 00:26:49.664 "params": { 00:26:49.664 "name": "Nvme$subsystem", 00:26:49.664 "trtype": "$TEST_TRANSPORT", 00:26:49.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.664 "adrfam": "ipv4", 00:26:49.664 "trsvcid": "$NVMF_PORT", 00:26:49.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.664 "hdgst": ${hdgst:-false}, 00:26:49.664 "ddgst": ${ddgst:-false} 00:26:49.664 }, 00:26:49.664 "method": "bdev_nvme_attach_controller" 00:26:49.664 } 00:26:49.664 EOF 00:26:49.664 )") 00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:49.664 16:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:49.664 "params": { 00:26:49.664 "name": "Nvme1", 00:26:49.664 "trtype": "tcp", 00:26:49.664 "traddr": "10.0.0.2", 00:26:49.664 "adrfam": "ipv4", 00:26:49.664 "trsvcid": "4420", 00:26:49.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:49.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:49.664 "hdgst": false, 00:26:49.664 "ddgst": false 00:26:49.664 }, 00:26:49.664 "method": "bdev_nvme_attach_controller" 00:26:49.664 }' 00:26:49.664 [2024-11-20 16:28:20.830564] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:26:49.664 [2024-11-20 16:28:20.830614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074225 ] 00:26:49.922 [2024-11-20 16:28:20.905289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.922 [2024-11-20 16:28:20.943593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.181 Running I/O for 15 seconds... 00:26:52.049 11239.00 IOPS, 43.90 MiB/s [2024-11-20T15:28:23.854Z] 11399.50 IOPS, 44.53 MiB/s [2024-11-20T15:28:23.854Z] 16:28:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2073961 00:26:52.620 16:28:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:52.620 [2024-11-20 16:28:23.797786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.797825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.797842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.797852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.797862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.797870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.797880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.797893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.797904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.797911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.797920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 
16:28:23.797928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.797937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.797944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.797953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.797960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.797968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.797975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.797985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.797993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-11-20 16:28:23.798472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.620 [2024-11-20 16:28:23.798479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.798598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.798612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.798628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.798642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.798657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.798671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.798686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 
[2024-11-20 16:28:23.798694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.621 [2024-11-20 16:28:23.798956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.798971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.798985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.798994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:49 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.799002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.799010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.799017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.799025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.799031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.799039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.799046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.799055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.621 [2024-11-20 16:28:23.799061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.621 [2024-11-20 16:28:23.799070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98792 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 
16:28:23.799311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.622 [2024-11-20 16:28:23.799657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.622 [2024-11-20 16:28:23.799663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.799890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.623 [2024-11-20 16:28:23.799896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 
16:28:23.799904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102fae0 is same with the state(6) to be set 00:26:52.623 [2024-11-20 16:28:23.799912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:52.623 [2024-11-20 16:28:23.799917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:52.623 [2024-11-20 16:28:23.799924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99200 len:8 PRP1 0x0 PRP2 0x0 00:26:52.623 [2024-11-20 16:28:23.799933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.623 [2024-11-20 16:28:23.802758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.623 [2024-11-20 16:28:23.802816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.623 [2024-11-20 16:28:23.803386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-11-20 16:28:23.803402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.623 [2024-11-20 16:28:23.803410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.623 [2024-11-20 16:28:23.803585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.623 [2024-11-20 16:28:23.803759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.623 [2024-11-20 16:28:23.803767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.623 [2024-11-20 16:28:23.803774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.623 [2024-11-20 16:28:23.803781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.623 [2024-11-20 16:28:23.816013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.623 [2024-11-20 16:28:23.816390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-11-20 16:28:23.816408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.623 [2024-11-20 16:28:23.816415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.623 [2024-11-20 16:28:23.816590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.623 [2024-11-20 16:28:23.816765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.623 [2024-11-20 16:28:23.816773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.623 [2024-11-20 16:28:23.816780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:26:52.623 [2024-11-20 16:28:23.816787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.623 [2024-11-20 16:28:23.828762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.623 [2024-11-20 16:28:23.829142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-11-20 16:28:23.829160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.623 [2024-11-20 16:28:23.829167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.623 [2024-11-20 16:28:23.829342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.623 [2024-11-20 16:28:23.829512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.623 [2024-11-20 16:28:23.829520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.623 [2024-11-20 16:28:23.829527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.623 [2024-11-20 16:28:23.829533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.623 [2024-11-20 16:28:23.841631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.623 [2024-11-20 16:28:23.842073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-11-20 16:28:23.842089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.623 [2024-11-20 16:28:23.842100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.623 [2024-11-20 16:28:23.842277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.623 [2024-11-20 16:28:23.842446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.623 [2024-11-20 16:28:23.842454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.623 [2024-11-20 16:28:23.842460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.623 [2024-11-20 16:28:23.842467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
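The wall of ABORTED - SQ DELETION completions above, followed by this reset/reconnect churn, is the intended fallout of the harness killing what is evidently the nvmf target (kill -9 2073961) partway through the 15-second bdevperf run: every command still queued on the TCP qpair is completed manually with an abort status, and bdev_nvme then keeps trying to reconnect. When triaging a log like this it is usually enough to count the aborts and failed resets rather than read them; a quick sketch, with build.log standing in for a saved copy of this console output:

# How many queued I/Os were aborted, and how many reconnect attempts failed?
grep -c 'ABORTED - SQ DELETION' build.log
grep -c 'Resetting controller failed' build.log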
00:26:52.883 [2024-11-20 16:28:23.854554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.883 [2024-11-20 16:28:23.855027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.883 [2024-11-20 16:28:23.855045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.883 [2024-11-20 16:28:23.855053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.883 [2024-11-20 16:28:23.855230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.883 [2024-11-20 16:28:23.855400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.883 [2024-11-20 16:28:23.855409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.884 [2024-11-20 16:28:23.855415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.884 [2024-11-20 16:28:23.855422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.884 [2024-11-20 16:28:23.867367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.884 [2024-11-20 16:28:23.867696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.884 [2024-11-20 16:28:23.867745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.884 [2024-11-20 16:28:23.867770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.884 [2024-11-20 16:28:23.868371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.884 [2024-11-20 16:28:23.868807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.884 [2024-11-20 16:28:23.868816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.884 [2024-11-20 16:28:23.868823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.884 [2024-11-20 16:28:23.868829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.884 [2024-11-20 16:28:23.880323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.884 [2024-11-20 16:28:23.880692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.884 [2024-11-20 16:28:23.880709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.884 [2024-11-20 16:28:23.880716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.884 [2024-11-20 16:28:23.880885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.884 [2024-11-20 16:28:23.881058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.884 [2024-11-20 16:28:23.881067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.884 [2024-11-20 16:28:23.881075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.884 [2024-11-20 16:28:23.881081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.884 [2024-11-20 16:28:23.893173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.884 [2024-11-20 16:28:23.893521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.884 [2024-11-20 16:28:23.893539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.884 [2024-11-20 16:28:23.893546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.884 [2024-11-20 16:28:23.893714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.884 [2024-11-20 16:28:23.893883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.884 [2024-11-20 16:28:23.893892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.884 [2024-11-20 16:28:23.893898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.884 [2024-11-20 16:28:23.893904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.884 [2024-11-20 16:28:23.905958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.884 [2024-11-20 16:28:23.906419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.884 [2024-11-20 16:28:23.906436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.884 [2024-11-20 16:28:23.906444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.884 [2024-11-20 16:28:23.906612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.884 [2024-11-20 16:28:23.906781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.884 [2024-11-20 16:28:23.906789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.884 [2024-11-20 16:28:23.906795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.884 [2024-11-20 16:28:23.906801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.884 [2024-11-20 16:28:23.918765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.884 [2024-11-20 16:28:23.919178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.884 [2024-11-20 16:28:23.919194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.884 [2024-11-20 16:28:23.919207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.884 [2024-11-20 16:28:23.919376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.884 [2024-11-20 16:28:23.919545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.884 [2024-11-20 16:28:23.919553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.884 [2024-11-20 16:28:23.919562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.884 [2024-11-20 16:28:23.919569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.884 [2024-11-20 16:28:23.931528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.884 [2024-11-20 16:28:23.931912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.884 [2024-11-20 16:28:23.931929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.884 [2024-11-20 16:28:23.931937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.884 [2024-11-20 16:28:23.932107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.884 [2024-11-20 16:28:23.932283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.884 [2024-11-20 16:28:23.932292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.884 [2024-11-20 16:28:23.932299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.884 [2024-11-20 16:28:23.932305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.884 [2024-11-20 16:28:23.944379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.884 [2024-11-20 16:28:23.944733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.884 [2024-11-20 16:28:23.944749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.884 [2024-11-20 16:28:23.944757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.884 [2024-11-20 16:28:23.944926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.884 [2024-11-20 16:28:23.945095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.884 [2024-11-20 16:28:23.945104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.884 [2024-11-20 16:28:23.945110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.884 [2024-11-20 16:28:23.945117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.884 [2024-11-20 16:28:23.957251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.884 [2024-11-20 16:28:23.957634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.884 [2024-11-20 16:28:23.957679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.884 [2024-11-20 16:28:23.957703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.884 [2024-11-20 16:28:23.958156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.884 [2024-11-20 16:28:23.958332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.884 [2024-11-20 16:28:23.958341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.884 [2024-11-20 16:28:23.958347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.884 [2024-11-20 16:28:23.958353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.884 [2024-11-20 16:28:23.970000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.884 [2024-11-20 16:28:23.970449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.884 [2024-11-20 16:28:23.970466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.884 [2024-11-20 16:28:23.970475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.884 [2024-11-20 16:28:23.970645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.884 [2024-11-20 16:28:23.970819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.884 [2024-11-20 16:28:23.970828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.884 [2024-11-20 16:28:23.970834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.884 [2024-11-20 16:28:23.970840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.884 [2024-11-20 16:28:23.982800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.884 [2024-11-20 16:28:23.983156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.884 [2024-11-20 16:28:23.983174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.884 [2024-11-20 16:28:23.983182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.885 [2024-11-20 16:28:23.983357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.885 [2024-11-20 16:28:23.983526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.885 [2024-11-20 16:28:23.983538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.885 [2024-11-20 16:28:23.983546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.885 [2024-11-20 16:28:23.983553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.885 [2024-11-20 16:28:23.995756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.885 [2024-11-20 16:28:23.996192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.885 [2024-11-20 16:28:23.996214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.885 [2024-11-20 16:28:23.996222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.885 [2024-11-20 16:28:23.996395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.885 [2024-11-20 16:28:23.996570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.885 [2024-11-20 16:28:23.996578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.885 [2024-11-20 16:28:23.996584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.885 [2024-11-20 16:28:23.996591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.885 [2024-11-20 16:28:24.008850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.885 [2024-11-20 16:28:24.009274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.885 [2024-11-20 16:28:24.009292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.885 [2024-11-20 16:28:24.009303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.885 [2024-11-20 16:28:24.009477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.885 [2024-11-20 16:28:24.009651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.885 [2024-11-20 16:28:24.009659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.885 [2024-11-20 16:28:24.009665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.885 [2024-11-20 16:28:24.009672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.885 [2024-11-20 16:28:24.021930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.885 [2024-11-20 16:28:24.022338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.885 [2024-11-20 16:28:24.022356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.885 [2024-11-20 16:28:24.022364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.885 [2024-11-20 16:28:24.022537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.885 [2024-11-20 16:28:24.022712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.885 [2024-11-20 16:28:24.022720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.885 [2024-11-20 16:28:24.022726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.885 [2024-11-20 16:28:24.022733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.885 [2024-11-20 16:28:24.034983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.885 [2024-11-20 16:28:24.035391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.885 [2024-11-20 16:28:24.035407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.885 [2024-11-20 16:28:24.035415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.885 [2024-11-20 16:28:24.035588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.885 [2024-11-20 16:28:24.035763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.885 [2024-11-20 16:28:24.035771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.885 [2024-11-20 16:28:24.035777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.885 [2024-11-20 16:28:24.035784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.885 [2024-11-20 16:28:24.048160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.885 [2024-11-20 16:28:24.048579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.885 [2024-11-20 16:28:24.048597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.885 [2024-11-20 16:28:24.048606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.885 [2024-11-20 16:28:24.048802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.885 [2024-11-20 16:28:24.049002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.885 [2024-11-20 16:28:24.049013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.885 [2024-11-20 16:28:24.049021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.885 [2024-11-20 16:28:24.049028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.885 [2024-11-20 16:28:24.061178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.885 [2024-11-20 16:28:24.061612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.885 [2024-11-20 16:28:24.061630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.885 [2024-11-20 16:28:24.061637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.885 [2024-11-20 16:28:24.061811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.885 [2024-11-20 16:28:24.061986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.885 [2024-11-20 16:28:24.061994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.885 [2024-11-20 16:28:24.062001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.885 [2024-11-20 16:28:24.062008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.885 [2024-11-20 16:28:24.074285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.885 [2024-11-20 16:28:24.074650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.885 [2024-11-20 16:28:24.074668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.885 [2024-11-20 16:28:24.074677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.885 [2024-11-20 16:28:24.074851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.885 [2024-11-20 16:28:24.075026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.885 [2024-11-20 16:28:24.075035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.885 [2024-11-20 16:28:24.075041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.885 [2024-11-20 16:28:24.075048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.885 [2024-11-20 16:28:24.087494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.885 [2024-11-20 16:28:24.087943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.885 [2024-11-20 16:28:24.087962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.885 [2024-11-20 16:28:24.087970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.885 [2024-11-20 16:28:24.088154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.885 [2024-11-20 16:28:24.088346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.885 [2024-11-20 16:28:24.088356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.885 [2024-11-20 16:28:24.088366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.885 [2024-11-20 16:28:24.088373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.885 [2024-11-20 16:28:24.100599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.885 [2024-11-20 16:28:24.101037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.885 [2024-11-20 16:28:24.101054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:52.885 [2024-11-20 16:28:24.101062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:52.885 [2024-11-20 16:28:24.101252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:52.885 [2024-11-20 16:28:24.101436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.885 [2024-11-20 16:28:24.101445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.885 [2024-11-20 16:28:24.101452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.885 [2024-11-20 16:28:24.101459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.146 [2024-11-20 16:28:24.113788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.146 [2024-11-20 16:28:24.114126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.146 [2024-11-20 16:28:24.114145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.146 [2024-11-20 16:28:24.114153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.146 [2024-11-20 16:28:24.114356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.146 [2024-11-20 16:28:24.114544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.146 [2024-11-20 16:28:24.114553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.146 [2024-11-20 16:28:24.114559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.146 [2024-11-20 16:28:24.114566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.146 [2024-11-20 16:28:24.126760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.146 [2024-11-20 16:28:24.127221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.146 [2024-11-20 16:28:24.127270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.146 [2024-11-20 16:28:24.127295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.146 [2024-11-20 16:28:24.127881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.146 [2024-11-20 16:28:24.128430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.146 [2024-11-20 16:28:24.128440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.146 [2024-11-20 16:28:24.128446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.146 [2024-11-20 16:28:24.128454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.146 [2024-11-20 16:28:24.139771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.146 [2024-11-20 16:28:24.140114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.146 [2024-11-20 16:28:24.140131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.146 [2024-11-20 16:28:24.140138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.146 [2024-11-20 16:28:24.140311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.146 [2024-11-20 16:28:24.140479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.146 [2024-11-20 16:28:24.140488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.146 [2024-11-20 16:28:24.140494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.146 [2024-11-20 16:28:24.140500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.146 [2024-11-20 16:28:24.152597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.146 [2024-11-20 16:28:24.152946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.146 [2024-11-20 16:28:24.152962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.146 [2024-11-20 16:28:24.152969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.146 [2024-11-20 16:28:24.153138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.146 [2024-11-20 16:28:24.153313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.146 [2024-11-20 16:28:24.153322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.146 [2024-11-20 16:28:24.153328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.146 [2024-11-20 16:28:24.153335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.146 [2024-11-20 16:28:24.165434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.146 [2024-11-20 16:28:24.165881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.146 [2024-11-20 16:28:24.165897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.146 [2024-11-20 16:28:24.165905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.146 [2024-11-20 16:28:24.166072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.146 [2024-11-20 16:28:24.166249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.147 [2024-11-20 16:28:24.166258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.147 [2024-11-20 16:28:24.166264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.147 [2024-11-20 16:28:24.166271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.147 [2024-11-20 16:28:24.178215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.147 [2024-11-20 16:28:24.178597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.147 [2024-11-20 16:28:24.178641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.147 [2024-11-20 16:28:24.178673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.147 [2024-11-20 16:28:24.179273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.147 [2024-11-20 16:28:24.179682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.147 [2024-11-20 16:28:24.179690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.147 [2024-11-20 16:28:24.179696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.147 [2024-11-20 16:28:24.179702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.147 [2024-11-20 16:28:24.191152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.147 [2024-11-20 16:28:24.191505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.147 [2024-11-20 16:28:24.191522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.147 [2024-11-20 16:28:24.191529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.147 [2024-11-20 16:28:24.191697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.147 [2024-11-20 16:28:24.191865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.147 [2024-11-20 16:28:24.191873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.147 [2024-11-20 16:28:24.191879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.147 [2024-11-20 16:28:24.191885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.147 [2024-11-20 16:28:24.203911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.147 [2024-11-20 16:28:24.204331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.147 [2024-11-20 16:28:24.204348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.147 [2024-11-20 16:28:24.204355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.147 [2024-11-20 16:28:24.204515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.147 [2024-11-20 16:28:24.204673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.147 [2024-11-20 16:28:24.204681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.147 [2024-11-20 16:28:24.204686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.147 [2024-11-20 16:28:24.204692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.147 [2024-11-20 16:28:24.216752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.147 [2024-11-20 16:28:24.217194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.147 [2024-11-20 16:28:24.217251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.147 [2024-11-20 16:28:24.217275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.147 [2024-11-20 16:28:24.217689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.147 [2024-11-20 16:28:24.217858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.147 [2024-11-20 16:28:24.217869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.147 [2024-11-20 16:28:24.217876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.147 [2024-11-20 16:28:24.217882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.147 [2024-11-20 16:28:24.229516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.147 [2024-11-20 16:28:24.229935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.147 [2024-11-20 16:28:24.229950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.147 [2024-11-20 16:28:24.229957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.147 [2024-11-20 16:28:24.230116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.147 [2024-11-20 16:28:24.230299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.147 [2024-11-20 16:28:24.230308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.147 [2024-11-20 16:28:24.230314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.147 [2024-11-20 16:28:24.230321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.147 [2024-11-20 16:28:24.242248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.147 [2024-11-20 16:28:24.242668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.147 [2024-11-20 16:28:24.242684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.147 [2024-11-20 16:28:24.242691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.147 [2024-11-20 16:28:24.242850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.147 [2024-11-20 16:28:24.243009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.147 [2024-11-20 16:28:24.243016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.147 [2024-11-20 16:28:24.243022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.147 [2024-11-20 16:28:24.243028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.147 [2024-11-20 16:28:24.255035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.147 [2024-11-20 16:28:24.255489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.147 [2024-11-20 16:28:24.255506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.147 [2024-11-20 16:28:24.255513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.147 [2024-11-20 16:28:24.255681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.147 [2024-11-20 16:28:24.255850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.147 [2024-11-20 16:28:24.255858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.147 [2024-11-20 16:28:24.255864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.147 [2024-11-20 16:28:24.255873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.147 [2024-11-20 16:28:24.267840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.147 [2024-11-20 16:28:24.268244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.147 [2024-11-20 16:28:24.268288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.147 [2024-11-20 16:28:24.268312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.147 [2024-11-20 16:28:24.268575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.147 [2024-11-20 16:28:24.268736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.147 [2024-11-20 16:28:24.268744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.147 [2024-11-20 16:28:24.268749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.147 [2024-11-20 16:28:24.268755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.147 9552.33 IOPS, 37.31 MiB/s [2024-11-20T15:28:24.381Z] [2024-11-20 16:28:24.280673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.147 [2024-11-20 16:28:24.281069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.147 [2024-11-20 16:28:24.281085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.147 [2024-11-20 16:28:24.281092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.147 [2024-11-20 16:28:24.281273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.147 [2024-11-20 16:28:24.281442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.147 [2024-11-20 16:28:24.281451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.147 [2024-11-20 16:28:24.281457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.147 [2024-11-20 16:28:24.281463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.147 [2024-11-20 16:28:24.293549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.147 [2024-11-20 16:28:24.293974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.147 [2024-11-20 16:28:24.293990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.147 [2024-11-20 16:28:24.293997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.148 [2024-11-20 16:28:24.294157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.148 [2024-11-20 16:28:24.294345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.148 [2024-11-20 16:28:24.294354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.148 [2024-11-20 16:28:24.294360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.148 [2024-11-20 16:28:24.294366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.148 [2024-11-20 16:28:24.306294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.148 [2024-11-20 16:28:24.306666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.148 [2024-11-20 16:28:24.306682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.148 [2024-11-20 16:28:24.306690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.148 [2024-11-20 16:28:24.306850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.148 [2024-11-20 16:28:24.307010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.148 [2024-11-20 16:28:24.307018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.148 [2024-11-20 16:28:24.307025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.148 [2024-11-20 16:28:24.307032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.148 [2024-11-20 16:28:24.319376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.148 [2024-11-20 16:28:24.319781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.148 [2024-11-20 16:28:24.319798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.148 [2024-11-20 16:28:24.319805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.148 [2024-11-20 16:28:24.319979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.148 [2024-11-20 16:28:24.320154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.148 [2024-11-20 16:28:24.320162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.148 [2024-11-20 16:28:24.320169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.148 [2024-11-20 16:28:24.320175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.148 [2024-11-20 16:28:24.332225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.148 [2024-11-20 16:28:24.332620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.148 [2024-11-20 16:28:24.332636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.148 [2024-11-20 16:28:24.332643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.148 [2024-11-20 16:28:24.332802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.148 [2024-11-20 16:28:24.332961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.148 [2024-11-20 16:28:24.332969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.148 [2024-11-20 16:28:24.332975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.148 [2024-11-20 16:28:24.332981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.148 [2024-11-20 16:28:24.345072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.148 [2024-11-20 16:28:24.345515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.148 [2024-11-20 16:28:24.345560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.148 [2024-11-20 16:28:24.345591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.148 [2024-11-20 16:28:24.346055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.148 [2024-11-20 16:28:24.346230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.148 [2024-11-20 16:28:24.346238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.148 [2024-11-20 16:28:24.346244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.148 [2024-11-20 16:28:24.346251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.148 [2024-11-20 16:28:24.357831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.148 [2024-11-20 16:28:24.358250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.148 [2024-11-20 16:28:24.358267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.148 [2024-11-20 16:28:24.358274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.148 [2024-11-20 16:28:24.358433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.148 [2024-11-20 16:28:24.358592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.148 [2024-11-20 16:28:24.358600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.148 [2024-11-20 16:28:24.358606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.148 [2024-11-20 16:28:24.358612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.148 [2024-11-20 16:28:24.370707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.148 [2024-11-20 16:28:24.371150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.148 [2024-11-20 16:28:24.371172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.148 [2024-11-20 16:28:24.371180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.148 [2024-11-20 16:28:24.371366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.148 [2024-11-20 16:28:24.371540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.148 [2024-11-20 16:28:24.371549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.148 [2024-11-20 16:28:24.371555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.148 [2024-11-20 16:28:24.371562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.408 [2024-11-20 16:28:24.383754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.408 [2024-11-20 16:28:24.384184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.408 [2024-11-20 16:28:24.384208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.408 [2024-11-20 16:28:24.384216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.408 [2024-11-20 16:28:24.384400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.408 [2024-11-20 16:28:24.384570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.408 [2024-11-20 16:28:24.384582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.408 [2024-11-20 16:28:24.384589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.408 [2024-11-20 16:28:24.384595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.408 [2024-11-20 16:28:24.396547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.408 [2024-11-20 16:28:24.396971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.408 [2024-11-20 16:28:24.396988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.408 [2024-11-20 16:28:24.396995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.408 [2024-11-20 16:28:24.397155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.408 [2024-11-20 16:28:24.397342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.408 [2024-11-20 16:28:24.397351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.409 [2024-11-20 16:28:24.397357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.409 [2024-11-20 16:28:24.397363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.409 [2024-11-20 16:28:24.409275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.409 [2024-11-20 16:28:24.409620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.409 [2024-11-20 16:28:24.409637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.409 [2024-11-20 16:28:24.409644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.409 [2024-11-20 16:28:24.409804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.409 [2024-11-20 16:28:24.409963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.409 [2024-11-20 16:28:24.409971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.409 [2024-11-20 16:28:24.409977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.409 [2024-11-20 16:28:24.409983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.409 [2024-11-20 16:28:24.422120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.409 [2024-11-20 16:28:24.422559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.409 [2024-11-20 16:28:24.422576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.409 [2024-11-20 16:28:24.422583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.409 [2024-11-20 16:28:24.422752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.409 [2024-11-20 16:28:24.422924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.409 [2024-11-20 16:28:24.422932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.409 [2024-11-20 16:28:24.422938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.409 [2024-11-20 16:28:24.422948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.409 [2024-11-20 16:28:24.434879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.409 [2024-11-20 16:28:24.435274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.409 [2024-11-20 16:28:24.435291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.409 [2024-11-20 16:28:24.435298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.409 [2024-11-20 16:28:24.435458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.409 [2024-11-20 16:28:24.435617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.409 [2024-11-20 16:28:24.435625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.409 [2024-11-20 16:28:24.435631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.409 [2024-11-20 16:28:24.435637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.409 [2024-11-20 16:28:24.447703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.409 [2024-11-20 16:28:24.448055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.409 [2024-11-20 16:28:24.448071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.409 [2024-11-20 16:28:24.448078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.409 [2024-11-20 16:28:24.448253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.409 [2024-11-20 16:28:24.448423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.409 [2024-11-20 16:28:24.448431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.409 [2024-11-20 16:28:24.448437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.409 [2024-11-20 16:28:24.448444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.409 [2024-11-20 16:28:24.460457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.409 [2024-11-20 16:28:24.460876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.409 [2024-11-20 16:28:24.460892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.409 [2024-11-20 16:28:24.460898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.409 [2024-11-20 16:28:24.461057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.409 [2024-11-20 16:28:24.461222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.409 [2024-11-20 16:28:24.461246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.409 [2024-11-20 16:28:24.461253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.409 [2024-11-20 16:28:24.461259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.409 [2024-11-20 16:28:24.473347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.409 [2024-11-20 16:28:24.473696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.409 [2024-11-20 16:28:24.473741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.409 [2024-11-20 16:28:24.473765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.409 [2024-11-20 16:28:24.474280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.409 [2024-11-20 16:28:24.474451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.409 [2024-11-20 16:28:24.474459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.409 [2024-11-20 16:28:24.474465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.409 [2024-11-20 16:28:24.474471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.409 [2024-11-20 16:28:24.486162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.409 [2024-11-20 16:28:24.486506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.409 [2024-11-20 16:28:24.486522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.409 [2024-11-20 16:28:24.486529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.409 [2024-11-20 16:28:24.486688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.409 [2024-11-20 16:28:24.486848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.409 [2024-11-20 16:28:24.486855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.409 [2024-11-20 16:28:24.486861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.409 [2024-11-20 16:28:24.486867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.409 [2024-11-20 16:28:24.498934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.409 [2024-11-20 16:28:24.499341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.409 [2024-11-20 16:28:24.499357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.409 [2024-11-20 16:28:24.499364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.409 [2024-11-20 16:28:24.499524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.409 [2024-11-20 16:28:24.499683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.409 [2024-11-20 16:28:24.499691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.409 [2024-11-20 16:28:24.499697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.409 [2024-11-20 16:28:24.499703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.409 [2024-11-20 16:28:24.511765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.409 [2024-11-20 16:28:24.512186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.409 [2024-11-20 16:28:24.512207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.409 [2024-11-20 16:28:24.512214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.409 [2024-11-20 16:28:24.512399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.409 [2024-11-20 16:28:24.512568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.409 [2024-11-20 16:28:24.512576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.409 [2024-11-20 16:28:24.512582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.409 [2024-11-20 16:28:24.512588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.409 [2024-11-20 16:28:24.524510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.409 [2024-11-20 16:28:24.524898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.409 [2024-11-20 16:28:24.524946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.409 [2024-11-20 16:28:24.524970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.409 [2024-11-20 16:28:24.525511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.409 [2024-11-20 16:28:24.525673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.410 [2024-11-20 16:28:24.525680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.410 [2024-11-20 16:28:24.525686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.410 [2024-11-20 16:28:24.525692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.410 [2024-11-20 16:28:24.537374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.410 [2024-11-20 16:28:24.537764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.410 [2024-11-20 16:28:24.537780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.410 [2024-11-20 16:28:24.537787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.410 [2024-11-20 16:28:24.537946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.410 [2024-11-20 16:28:24.538105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.410 [2024-11-20 16:28:24.538112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.410 [2024-11-20 16:28:24.538118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.410 [2024-11-20 16:28:24.538124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.410 [2024-11-20 16:28:24.550207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.410 [2024-11-20 16:28:24.550618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.410 [2024-11-20 16:28:24.550634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.410 [2024-11-20 16:28:24.550641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.410 [2024-11-20 16:28:24.550800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.410 [2024-11-20 16:28:24.550960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.410 [2024-11-20 16:28:24.550970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.410 [2024-11-20 16:28:24.550976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.410 [2024-11-20 16:28:24.550982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.410 [2024-11-20 16:28:24.563060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.410 [2024-11-20 16:28:24.563490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.410 [2024-11-20 16:28:24.563507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.410 [2024-11-20 16:28:24.563514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.410 [2024-11-20 16:28:24.563683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.410 [2024-11-20 16:28:24.563852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.410 [2024-11-20 16:28:24.563860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.410 [2024-11-20 16:28:24.563867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.410 [2024-11-20 16:28:24.563873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.410 [2024-11-20 16:28:24.576062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.410 [2024-11-20 16:28:24.576408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.410 [2024-11-20 16:28:24.576425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.410 [2024-11-20 16:28:24.576432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.410 [2024-11-20 16:28:24.576606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.410 [2024-11-20 16:28:24.576780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.410 [2024-11-20 16:28:24.576789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.410 [2024-11-20 16:28:24.576796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.410 [2024-11-20 16:28:24.576803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.410 [2024-11-20 16:28:24.588816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.410 [2024-11-20 16:28:24.589232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.410 [2024-11-20 16:28:24.589248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.410 [2024-11-20 16:28:24.589255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.410 [2024-11-20 16:28:24.589414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.410 [2024-11-20 16:28:24.589574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.410 [2024-11-20 16:28:24.589582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.410 [2024-11-20 16:28:24.589588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.410 [2024-11-20 16:28:24.589597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.410 [2024-11-20 16:28:24.601637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.410 [2024-11-20 16:28:24.602032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.410 [2024-11-20 16:28:24.602076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.410 [2024-11-20 16:28:24.602100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.410 [2024-11-20 16:28:24.602603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.410 [2024-11-20 16:28:24.602773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.410 [2024-11-20 16:28:24.602781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.410 [2024-11-20 16:28:24.602788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.410 [2024-11-20 16:28:24.602794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.410 [2024-11-20 16:28:24.614427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.410 [2024-11-20 16:28:24.614846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.410 [2024-11-20 16:28:24.614862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.410 [2024-11-20 16:28:24.614868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.410 [2024-11-20 16:28:24.615027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.410 [2024-11-20 16:28:24.615187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.410 [2024-11-20 16:28:24.615194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.410 [2024-11-20 16:28:24.615200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.410 [2024-11-20 16:28:24.615212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.410 [2024-11-20 16:28:24.627160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.410 [2024-11-20 16:28:24.627577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.410 [2024-11-20 16:28:24.627594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.410 [2024-11-20 16:28:24.627600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.410 [2024-11-20 16:28:24.627760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.410 [2024-11-20 16:28:24.627920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.410 [2024-11-20 16:28:24.627928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.410 [2024-11-20 16:28:24.627933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.410 [2024-11-20 16:28:24.627939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.670 [2024-11-20 16:28:24.640341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.670 [2024-11-20 16:28:24.640782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-11-20 16:28:24.640803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.670 [2024-11-20 16:28:24.640810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.670 [2024-11-20 16:28:24.640971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.670 [2024-11-20 16:28:24.641131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.670 [2024-11-20 16:28:24.641139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.670 [2024-11-20 16:28:24.641145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.670 [2024-11-20 16:28:24.641151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.670 [2024-11-20 16:28:24.653222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.670 [2024-11-20 16:28:24.653666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-11-20 16:28:24.653715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.670 [2024-11-20 16:28:24.653741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.670 [2024-11-20 16:28:24.654342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.670 [2024-11-20 16:28:24.654849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.670 [2024-11-20 16:28:24.654857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.670 [2024-11-20 16:28:24.654863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.670 [2024-11-20 16:28:24.654870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.670 [2024-11-20 16:28:24.666060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.670 [2024-11-20 16:28:24.666516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-11-20 16:28:24.666563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.670 [2024-11-20 16:28:24.666587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.670 [2024-11-20 16:28:24.666979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.670 [2024-11-20 16:28:24.667149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.670 [2024-11-20 16:28:24.667157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.670 [2024-11-20 16:28:24.667163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.670 [2024-11-20 16:28:24.667170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.670 [2024-11-20 16:28:24.678827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.670 [2024-11-20 16:28:24.679274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-11-20 16:28:24.679321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.670 [2024-11-20 16:28:24.679345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.670 [2024-11-20 16:28:24.679782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.670 [2024-11-20 16:28:24.679943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.670 [2024-11-20 16:28:24.679951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.670 [2024-11-20 16:28:24.679956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.670 [2024-11-20 16:28:24.679962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.670 [2024-11-20 16:28:24.691696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.670 [2024-11-20 16:28:24.691998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-11-20 16:28:24.692015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.670 [2024-11-20 16:28:24.692022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.670 [2024-11-20 16:28:24.692191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.671 [2024-11-20 16:28:24.692365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.671 [2024-11-20 16:28:24.692374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.671 [2024-11-20 16:28:24.692380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.671 [2024-11-20 16:28:24.692387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.671 [2024-11-20 16:28:24.704549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.671 [2024-11-20 16:28:24.704992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-11-20 16:28:24.705036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-11-20 16:28:24.705060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.671 [2024-11-20 16:28:24.705657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.671 [2024-11-20 16:28:24.706070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.671 [2024-11-20 16:28:24.706078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.671 [2024-11-20 16:28:24.706084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.671 [2024-11-20 16:28:24.706091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.671 [2024-11-20 16:28:24.717331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.671 [2024-11-20 16:28:24.717769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-11-20 16:28:24.717807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-11-20 16:28:24.717833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.671 [2024-11-20 16:28:24.718432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.671 [2024-11-20 16:28:24.718894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.671 [2024-11-20 16:28:24.718905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.671 [2024-11-20 16:28:24.718911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.671 [2024-11-20 16:28:24.718918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.671 [2024-11-20 16:28:24.730101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.671 [2024-11-20 16:28:24.730491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-11-20 16:28:24.730539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-11-20 16:28:24.730565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.671 [2024-11-20 16:28:24.731052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.671 [2024-11-20 16:28:24.731226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.671 [2024-11-20 16:28:24.731235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.671 [2024-11-20 16:28:24.731242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.671 [2024-11-20 16:28:24.731248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.671 [2024-11-20 16:28:24.742887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.671 [2024-11-20 16:28:24.743257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-11-20 16:28:24.743317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-11-20 16:28:24.743341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.671 [2024-11-20 16:28:24.743909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.671 [2024-11-20 16:28:24.744069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.671 [2024-11-20 16:28:24.744078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.671 [2024-11-20 16:28:24.744084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.671 [2024-11-20 16:28:24.744090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.671 [2024-11-20 16:28:24.755767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.671 [2024-11-20 16:28:24.756193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-11-20 16:28:24.756215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-11-20 16:28:24.756223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.671 [2024-11-20 16:28:24.756392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.671 [2024-11-20 16:28:24.756561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.671 [2024-11-20 16:28:24.756570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.671 [2024-11-20 16:28:24.756577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.671 [2024-11-20 16:28:24.756584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.671 [2024-11-20 16:28:24.768625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.671 [2024-11-20 16:28:24.768996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-11-20 16:28:24.769040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-11-20 16:28:24.769063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.671 [2024-11-20 16:28:24.769658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.671 [2024-11-20 16:28:24.769842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.671 [2024-11-20 16:28:24.769850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.671 [2024-11-20 16:28:24.769856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.671 [2024-11-20 16:28:24.769862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.671 [2024-11-20 16:28:24.781372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.671 [2024-11-20 16:28:24.781717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-11-20 16:28:24.781761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-11-20 16:28:24.781785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.671 [2024-11-20 16:28:24.782381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.671 [2024-11-20 16:28:24.782917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.671 [2024-11-20 16:28:24.782924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.671 [2024-11-20 16:28:24.782930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.671 [2024-11-20 16:28:24.782936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.671 [2024-11-20 16:28:24.794103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.671 [2024-11-20 16:28:24.794541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-11-20 16:28:24.794558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-11-20 16:28:24.794565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.671 [2024-11-20 16:28:24.794734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.671 [2024-11-20 16:28:24.794904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.671 [2024-11-20 16:28:24.794912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.671 [2024-11-20 16:28:24.794918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.671 [2024-11-20 16:28:24.794925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.671 [2024-11-20 16:28:24.806954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.671 [2024-11-20 16:28:24.807318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-11-20 16:28:24.807338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-11-20 16:28:24.807345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.671 [2024-11-20 16:28:24.807515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.671 [2024-11-20 16:28:24.807674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.671 [2024-11-20 16:28:24.807681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.671 [2024-11-20 16:28:24.807686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.671 [2024-11-20 16:28:24.807692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.671 [2024-11-20 16:28:24.820089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.671 [2024-11-20 16:28:24.820518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-11-20 16:28:24.820535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-11-20 16:28:24.820542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.672 [2024-11-20 16:28:24.820711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.672 [2024-11-20 16:28:24.820882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.672 [2024-11-20 16:28:24.820890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.672 [2024-11-20 16:28:24.820897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.672 [2024-11-20 16:28:24.820904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.672 [2024-11-20 16:28:24.833098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.672 [2024-11-20 16:28:24.833461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-11-20 16:28:24.833479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-11-20 16:28:24.833487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.672 [2024-11-20 16:28:24.833661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.672 [2024-11-20 16:28:24.833833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.672 [2024-11-20 16:28:24.833842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.672 [2024-11-20 16:28:24.833849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.672 [2024-11-20 16:28:24.833857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.672 [2024-11-20 16:28:24.845860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.672 [2024-11-20 16:28:24.846224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-11-20 16:28:24.846241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-11-20 16:28:24.846248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.672 [2024-11-20 16:28:24.846422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.672 [2024-11-20 16:28:24.846591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.672 [2024-11-20 16:28:24.846599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.672 [2024-11-20 16:28:24.846606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.672 [2024-11-20 16:28:24.846612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.672 [2024-11-20 16:28:24.858617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.672 [2024-11-20 16:28:24.859040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-11-20 16:28:24.859056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-11-20 16:28:24.859063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.672 [2024-11-20 16:28:24.859239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.672 [2024-11-20 16:28:24.859408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.672 [2024-11-20 16:28:24.859416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.672 [2024-11-20 16:28:24.859422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.672 [2024-11-20 16:28:24.859429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.672 [2024-11-20 16:28:24.871585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.672 [2024-11-20 16:28:24.872024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-11-20 16:28:24.872068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-11-20 16:28:24.872091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.672 [2024-11-20 16:28:24.872690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.672 [2024-11-20 16:28:24.873232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.672 [2024-11-20 16:28:24.873241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.672 [2024-11-20 16:28:24.873247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.672 [2024-11-20 16:28:24.873254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.672 [2024-11-20 16:28:24.884445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.672 [2024-11-20 16:28:24.884813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-11-20 16:28:24.884829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-11-20 16:28:24.884836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.672 [2024-11-20 16:28:24.885005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.672 [2024-11-20 16:28:24.885175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.672 [2024-11-20 16:28:24.885183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.672 [2024-11-20 16:28:24.885192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.672 [2024-11-20 16:28:24.885199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.672 [2024-11-20 16:28:24.897474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.672 [2024-11-20 16:28:24.897838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-11-20 16:28:24.897856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-11-20 16:28:24.897864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.672 [2024-11-20 16:28:24.898050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.672 [2024-11-20 16:28:24.898244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.672 [2024-11-20 16:28:24.898253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.672 [2024-11-20 16:28:24.898260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.672 [2024-11-20 16:28:24.898268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.932 [2024-11-20 16:28:24.910374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.932 [2024-11-20 16:28:24.910767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.932 [2024-11-20 16:28:24.910785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.932 [2024-11-20 16:28:24.910793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.932 [2024-11-20 16:28:24.910967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.932 [2024-11-20 16:28:24.911141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.932 [2024-11-20 16:28:24.911150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.932 [2024-11-20 16:28:24.911156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.932 [2024-11-20 16:28:24.911163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.932 [2024-11-20 16:28:24.923182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.932 [2024-11-20 16:28:24.923604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.932 [2024-11-20 16:28:24.923621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.932 [2024-11-20 16:28:24.923628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.932 [2024-11-20 16:28:24.923787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.932 [2024-11-20 16:28:24.923947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.932 [2024-11-20 16:28:24.923955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.932 [2024-11-20 16:28:24.923961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.932 [2024-11-20 16:28:24.923966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.932 [2024-11-20 16:28:24.935965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.932 [2024-11-20 16:28:24.936356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.932 [2024-11-20 16:28:24.936373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.932 [2024-11-20 16:28:24.936380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.932 [2024-11-20 16:28:24.936540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.932 [2024-11-20 16:28:24.936700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.932 [2024-11-20 16:28:24.936708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.932 [2024-11-20 16:28:24.936714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.932 [2024-11-20 16:28:24.936719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.932 [2024-11-20 16:28:24.948713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.932 [2024-11-20 16:28:24.949144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.932 [2024-11-20 16:28:24.949189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.932 [2024-11-20 16:28:24.949229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.932 [2024-11-20 16:28:24.949814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.932 [2024-11-20 16:28:24.950409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.932 [2024-11-20 16:28:24.950435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.932 [2024-11-20 16:28:24.950457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.932 [2024-11-20 16:28:24.950485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.932 [2024-11-20 16:28:24.963833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.932 [2024-11-20 16:28:24.964273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.932 [2024-11-20 16:28:24.964295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.932 [2024-11-20 16:28:24.964306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.932 [2024-11-20 16:28:24.964562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.932 [2024-11-20 16:28:24.964817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.932 [2024-11-20 16:28:24.964829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.932 [2024-11-20 16:28:24.964838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.932 [2024-11-20 16:28:24.964848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.932 [2024-11-20 16:28:24.976782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.932 [2024-11-20 16:28:24.977221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.932 [2024-11-20 16:28:24.977241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.932 [2024-11-20 16:28:24.977248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.932 [2024-11-20 16:28:24.977416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.932 [2024-11-20 16:28:24.977585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.932 [2024-11-20 16:28:24.977593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.932 [2024-11-20 16:28:24.977599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.932 [2024-11-20 16:28:24.977605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.932 [2024-11-20 16:28:24.989653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.932 [2024-11-20 16:28:24.990076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.932 [2024-11-20 16:28:24.990120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.932 [2024-11-20 16:28:24.990143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.932 [2024-11-20 16:28:24.990626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.932 [2024-11-20 16:28:24.990796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.932 [2024-11-20 16:28:24.990804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.932 [2024-11-20 16:28:24.990810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.932 [2024-11-20 16:28:24.990816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.932 [2024-11-20 16:28:25.002449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.932 [2024-11-20 16:28:25.002859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.932 [2024-11-20 16:28:25.002875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.932 [2024-11-20 16:28:25.002882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.932 [2024-11-20 16:28:25.003040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.932 [2024-11-20 16:28:25.003200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.932 [2024-11-20 16:28:25.003214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.932 [2024-11-20 16:28:25.003220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.932 [2024-11-20 16:28:25.003226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.932 [2024-11-20 16:28:25.015206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.932 [2024-11-20 16:28:25.015635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.932 [2024-11-20 16:28:25.015679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.932 [2024-11-20 16:28:25.015702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.932 [2024-11-20 16:28:25.016301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.932 [2024-11-20 16:28:25.016685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.932 [2024-11-20 16:28:25.016693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.932 [2024-11-20 16:28:25.016699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.932 [2024-11-20 16:28:25.016705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.932 [2024-11-20 16:28:25.027952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.932 [2024-11-20 16:28:25.028383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.932 [2024-11-20 16:28:25.028430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-11-20 16:28:25.028454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.933 [2024-11-20 16:28:25.028645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.933 [2024-11-20 16:28:25.028805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.933 [2024-11-20 16:28:25.028813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.933 [2024-11-20 16:28:25.028819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.933 [2024-11-20 16:28:25.028825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.933 [2024-11-20 16:28:25.040753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.933 [2024-11-20 16:28:25.041078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-11-20 16:28:25.041094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-11-20 16:28:25.041101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.933 [2024-11-20 16:28:25.041284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.933 [2024-11-20 16:28:25.041454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.933 [2024-11-20 16:28:25.041462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.933 [2024-11-20 16:28:25.041468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.933 [2024-11-20 16:28:25.041475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.933 [2024-11-20 16:28:25.053593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.933 [2024-11-20 16:28:25.054033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-11-20 16:28:25.054052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-11-20 16:28:25.054059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.933 [2024-11-20 16:28:25.054235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.933 [2024-11-20 16:28:25.054403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.933 [2024-11-20 16:28:25.054412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.933 [2024-11-20 16:28:25.054421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.933 [2024-11-20 16:28:25.054428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.933 [2024-11-20 16:28:25.066609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.933 [2024-11-20 16:28:25.066980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-11-20 16:28:25.066996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-11-20 16:28:25.067003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.933 [2024-11-20 16:28:25.067163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.933 [2024-11-20 16:28:25.067347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.933 [2024-11-20 16:28:25.067356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.933 [2024-11-20 16:28:25.067363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.933 [2024-11-20 16:28:25.067369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.933 [2024-11-20 16:28:25.079543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.933 [2024-11-20 16:28:25.079913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-11-20 16:28:25.079930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-11-20 16:28:25.079938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.933 [2024-11-20 16:28:25.080106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.933 [2024-11-20 16:28:25.080281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.933 [2024-11-20 16:28:25.080290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.933 [2024-11-20 16:28:25.080296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.933 [2024-11-20 16:28:25.080302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.933 [2024-11-20 16:28:25.092656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.933 [2024-11-20 16:28:25.092957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-11-20 16:28:25.092974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-11-20 16:28:25.092982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.933 [2024-11-20 16:28:25.093156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.933 [2024-11-20 16:28:25.093335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.933 [2024-11-20 16:28:25.093344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.933 [2024-11-20 16:28:25.093350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.933 [2024-11-20 16:28:25.093357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.933 [2024-11-20 16:28:25.105450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.933 [2024-11-20 16:28:25.105864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-11-20 16:28:25.105881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-11-20 16:28:25.105888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.933 [2024-11-20 16:28:25.106057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.933 [2024-11-20 16:28:25.106232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.933 [2024-11-20 16:28:25.106240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.933 [2024-11-20 16:28:25.106247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.933 [2024-11-20 16:28:25.106253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.933 [2024-11-20 16:28:25.118321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.933 [2024-11-20 16:28:25.118736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-11-20 16:28:25.118753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-11-20 16:28:25.118760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.933 [2024-11-20 16:28:25.118928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.933 [2024-11-20 16:28:25.119100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.933 [2024-11-20 16:28:25.119109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.933 [2024-11-20 16:28:25.119115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.933 [2024-11-20 16:28:25.119121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.933 [2024-11-20 16:28:25.131191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.933 [2024-11-20 16:28:25.131519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-11-20 16:28:25.131535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-11-20 16:28:25.131541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.933 [2024-11-20 16:28:25.131701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.933 [2024-11-20 16:28:25.131860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.933 [2024-11-20 16:28:25.131868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.933 [2024-11-20 16:28:25.131874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.933 [2024-11-20 16:28:25.131880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.933 [2024-11-20 16:28:25.144018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.933 [2024-11-20 16:28:25.144429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-11-20 16:28:25.144446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-11-20 16:28:25.144456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.933 [2024-11-20 16:28:25.144625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.933 [2024-11-20 16:28:25.144793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.933 [2024-11-20 16:28:25.144801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.933 [2024-11-20 16:28:25.144807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.933 [2024-11-20 16:28:25.144814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.933 [2024-11-20 16:28:25.156762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.933 [2024-11-20 16:28:25.157172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-11-20 16:28:25.157189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:53.934 [2024-11-20 16:28:25.157196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:53.934 [2024-11-20 16:28:25.157375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:53.934 [2024-11-20 16:28:25.157548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.934 [2024-11-20 16:28:25.157556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.934 [2024-11-20 16:28:25.157562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.934 [2024-11-20 16:28:25.157568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.192 [2024-11-20 16:28:25.169735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.193 [2024-11-20 16:28:25.170191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-11-20 16:28:25.170257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.193 [2024-11-20 16:28:25.170283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.193 [2024-11-20 16:28:25.170743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.193 [2024-11-20 16:28:25.170913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.193 [2024-11-20 16:28:25.170921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.193 [2024-11-20 16:28:25.170927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.193 [2024-11-20 16:28:25.170934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.193 [2024-11-20 16:28:25.182690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.193 [2024-11-20 16:28:25.183133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-11-20 16:28:25.183179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.193 [2024-11-20 16:28:25.183214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.193 [2024-11-20 16:28:25.183802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.193 [2024-11-20 16:28:25.183975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.193 [2024-11-20 16:28:25.183984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.193 [2024-11-20 16:28:25.183990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.193 [2024-11-20 16:28:25.183997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.193 [2024-11-20 16:28:25.195589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.193 [2024-11-20 16:28:25.196015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-11-20 16:28:25.196060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.193 [2024-11-20 16:28:25.196083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.193 [2024-11-20 16:28:25.196681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.193 [2024-11-20 16:28:25.197157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.193 [2024-11-20 16:28:25.197165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.193 [2024-11-20 16:28:25.197171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.193 [2024-11-20 16:28:25.197177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.193 [2024-11-20 16:28:25.208392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.193 [2024-11-20 16:28:25.208698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-11-20 16:28:25.208714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.193 [2024-11-20 16:28:25.208722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.193 [2024-11-20 16:28:25.208890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.193 [2024-11-20 16:28:25.209059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.193 [2024-11-20 16:28:25.209068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.193 [2024-11-20 16:28:25.209074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.193 [2024-11-20 16:28:25.209080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.193 [2024-11-20 16:28:25.221173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.193 [2024-11-20 16:28:25.221604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-11-20 16:28:25.221650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.193 [2024-11-20 16:28:25.221674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.193 [2024-11-20 16:28:25.222117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.193 [2024-11-20 16:28:25.222294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.193 [2024-11-20 16:28:25.222303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.193 [2024-11-20 16:28:25.222313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.193 [2024-11-20 16:28:25.222319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.193 [2024-11-20 16:28:25.234224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.193 [2024-11-20 16:28:25.234647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-11-20 16:28:25.234691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.193 [2024-11-20 16:28:25.234714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.193 [2024-11-20 16:28:25.235121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.193 [2024-11-20 16:28:25.235302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.193 [2024-11-20 16:28:25.235310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.193 [2024-11-20 16:28:25.235317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.193 [2024-11-20 16:28:25.235323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.193 [2024-11-20 16:28:25.246979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.193 [2024-11-20 16:28:25.247371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-11-20 16:28:25.247389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.193 [2024-11-20 16:28:25.247397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.193 [2024-11-20 16:28:25.247565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.193 [2024-11-20 16:28:25.247737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.193 [2024-11-20 16:28:25.247745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.193 [2024-11-20 16:28:25.247751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.193 [2024-11-20 16:28:25.247758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.193 [2024-11-20 16:28:25.259725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.193 [2024-11-20 16:28:25.260159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-11-20 16:28:25.260176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.193 [2024-11-20 16:28:25.260183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.193 [2024-11-20 16:28:25.260356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.193 [2024-11-20 16:28:25.260525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.193 [2024-11-20 16:28:25.260533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.193 [2024-11-20 16:28:25.260539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.193 [2024-11-20 16:28:25.260545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.193 [2024-11-20 16:28:25.272480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.193 [2024-11-20 16:28:25.272893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-11-20 16:28:25.272910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.193 [2024-11-20 16:28:25.272917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.193 [2024-11-20 16:28:25.273085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.193 [2024-11-20 16:28:25.273263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.193 [2024-11-20 16:28:25.273272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.193 [2024-11-20 16:28:25.273278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.193 [2024-11-20 16:28:25.273285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.193 7164.25 IOPS, 27.99 MiB/s [2024-11-20T15:28:25.427Z] [2024-11-20 16:28:25.285364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.193 [2024-11-20 16:28:25.285705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-11-20 16:28:25.285721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.193 [2024-11-20 16:28:25.285729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.193 [2024-11-20 16:28:25.285898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.193 [2024-11-20 16:28:25.286067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.193 [2024-11-20 16:28:25.286075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.194 [2024-11-20 16:28:25.286081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.194 [2024-11-20 16:28:25.286088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
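The "7164.25 IOPS, 27.99 MiB/s" figure interleaved above is a periodic bdevperf throughput sample printed between the reconnect errors. The two numbers are consistent with a 4 KiB I/O size; that block size is an inference from the arithmetic, not something stated in the log. A quick check, as a sketch under that assumption:

    # Sanity-check of the interleaved bdevperf sample, assuming 4 KiB I/Os.
    iops = 7164.25
    io_size = 4096                          # bytes; assumed, not stated in the log
    mib_per_s = iops * io_size / (1024 * 1024)
    print(round(mib_per_s, 2))              # 27.99, matching the logged MiB/s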
00:26:54.194 [2024-11-20 16:28:25.298205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.194 [2024-11-20 16:28:25.298548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-11-20 16:28:25.298565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.194 [2024-11-20 16:28:25.298572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.194 [2024-11-20 16:28:25.298740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.194 [2024-11-20 16:28:25.298912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.194 [2024-11-20 16:28:25.298921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.194 [2024-11-20 16:28:25.298927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.194 [2024-11-20 16:28:25.298934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.194 [2024-11-20 16:28:25.311183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.194 [2024-11-20 16:28:25.311552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-11-20 16:28:25.311569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.194 [2024-11-20 16:28:25.311580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.194 [2024-11-20 16:28:25.311748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.194 [2024-11-20 16:28:25.311916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.194 [2024-11-20 16:28:25.311924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.194 [2024-11-20 16:28:25.311931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.194 [2024-11-20 16:28:25.311937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.194 [2024-11-20 16:28:25.324037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.194 [2024-11-20 16:28:25.324342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-11-20 16:28:25.324358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.194 [2024-11-20 16:28:25.324365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.194 [2024-11-20 16:28:25.324533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.194 [2024-11-20 16:28:25.324702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.194 [2024-11-20 16:28:25.324710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.194 [2024-11-20 16:28:25.324717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.194 [2024-11-20 16:28:25.324723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.194 [2024-11-20 16:28:25.336842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.194 [2024-11-20 16:28:25.337325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-11-20 16:28:25.337342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.194 [2024-11-20 16:28:25.337350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.194 [2024-11-20 16:28:25.337518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.194 [2024-11-20 16:28:25.337687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.194 [2024-11-20 16:28:25.337695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.194 [2024-11-20 16:28:25.337701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.194 [2024-11-20 16:28:25.337708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.194 [2024-11-20 16:28:25.349946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.194 [2024-11-20 16:28:25.350349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-11-20 16:28:25.350367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.194 [2024-11-20 16:28:25.350375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.194 [2024-11-20 16:28:25.350549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.194 [2024-11-20 16:28:25.350726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.194 [2024-11-20 16:28:25.350735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.194 [2024-11-20 16:28:25.350742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.194 [2024-11-20 16:28:25.350748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.194 [2024-11-20 16:28:25.362792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.194 [2024-11-20 16:28:25.363228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-11-20 16:28:25.363245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.194 [2024-11-20 16:28:25.363252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.194 [2024-11-20 16:28:25.363428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.194 [2024-11-20 16:28:25.363588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.194 [2024-11-20 16:28:25.363595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.194 [2024-11-20 16:28:25.363601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.194 [2024-11-20 16:28:25.363607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.194 [2024-11-20 16:28:25.375636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.194 [2024-11-20 16:28:25.376086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-11-20 16:28:25.376131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.194 [2024-11-20 16:28:25.376155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.194 [2024-11-20 16:28:25.376630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.194 [2024-11-20 16:28:25.376800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.194 [2024-11-20 16:28:25.376808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.194 [2024-11-20 16:28:25.376815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.194 [2024-11-20 16:28:25.376821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.194 [2024-11-20 16:28:25.388469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.194 [2024-11-20 16:28:25.388910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-11-20 16:28:25.388926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.194 [2024-11-20 16:28:25.388933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.194 [2024-11-20 16:28:25.389091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.194 [2024-11-20 16:28:25.389275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.194 [2024-11-20 16:28:25.389284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.194 [2024-11-20 16:28:25.389293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.194 [2024-11-20 16:28:25.389300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.194 [2024-11-20 16:28:25.401285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.194 [2024-11-20 16:28:25.401668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-11-20 16:28:25.401685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.194 [2024-11-20 16:28:25.401692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.194 [2024-11-20 16:28:25.401862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.194 [2024-11-20 16:28:25.402031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.194 [2024-11-20 16:28:25.402039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.194 [2024-11-20 16:28:25.402046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.194 [2024-11-20 16:28:25.402053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.194 [2024-11-20 16:28:25.414238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.194 [2024-11-20 16:28:25.414587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-11-20 16:28:25.414604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.194 [2024-11-20 16:28:25.414611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.195 [2024-11-20 16:28:25.414779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.195 [2024-11-20 16:28:25.414948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.195 [2024-11-20 16:28:25.414956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.195 [2024-11-20 16:28:25.414962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.195 [2024-11-20 16:28:25.414969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.454 [2024-11-20 16:28:25.427131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.454 [2024-11-20 16:28:25.427506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.454 [2024-11-20 16:28:25.427525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.454 [2024-11-20 16:28:25.427533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.454 [2024-11-20 16:28:25.427707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.454 [2024-11-20 16:28:25.427880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.454 [2024-11-20 16:28:25.427889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.454 [2024-11-20 16:28:25.427896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.454 [2024-11-20 16:28:25.427905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.454 [2024-11-20 16:28:25.440025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.454 [2024-11-20 16:28:25.440393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.454 [2024-11-20 16:28:25.440411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.454 [2024-11-20 16:28:25.440418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.454 [2024-11-20 16:28:25.440587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.454 [2024-11-20 16:28:25.440757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.454 [2024-11-20 16:28:25.440765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.454 [2024-11-20 16:28:25.440772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.454 [2024-11-20 16:28:25.440779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.454 [2024-11-20 16:28:25.452956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.454 [2024-11-20 16:28:25.453336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.454 [2024-11-20 16:28:25.453383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.454 [2024-11-20 16:28:25.453407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.454 [2024-11-20 16:28:25.453992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.454 [2024-11-20 16:28:25.454590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.454 [2024-11-20 16:28:25.454613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.454 [2024-11-20 16:28:25.454620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.454 [2024-11-20 16:28:25.454627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.454 [2024-11-20 16:28:25.465836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.454 [2024-11-20 16:28:25.466262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.454 [2024-11-20 16:28:25.466280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.454 [2024-11-20 16:28:25.466287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.454 [2024-11-20 16:28:25.466455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.454 [2024-11-20 16:28:25.466624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.454 [2024-11-20 16:28:25.466632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.454 [2024-11-20 16:28:25.466638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.454 [2024-11-20 16:28:25.466644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.454 [2024-11-20 16:28:25.478609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.454 [2024-11-20 16:28:25.479050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.454 [2024-11-20 16:28:25.479095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.454 [2024-11-20 16:28:25.479127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.454 [2024-11-20 16:28:25.479647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.454 [2024-11-20 16:28:25.479818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.454 [2024-11-20 16:28:25.479826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.454 [2024-11-20 16:28:25.479832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.454 [2024-11-20 16:28:25.479839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.454 [2024-11-20 16:28:25.491495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.454 [2024-11-20 16:28:25.491794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.454 [2024-11-20 16:28:25.491811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.455 [2024-11-20 16:28:25.491818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.455 [2024-11-20 16:28:25.491987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.455 [2024-11-20 16:28:25.492156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.455 [2024-11-20 16:28:25.492164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.455 [2024-11-20 16:28:25.492170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.455 [2024-11-20 16:28:25.492176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.455 [2024-11-20 16:28:25.504277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.455 [2024-11-20 16:28:25.504570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.455 [2024-11-20 16:28:25.504587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.455 [2024-11-20 16:28:25.504594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.455 [2024-11-20 16:28:25.504762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.455 [2024-11-20 16:28:25.504931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.455 [2024-11-20 16:28:25.504939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.455 [2024-11-20 16:28:25.504946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.455 [2024-11-20 16:28:25.504952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.455 [2024-11-20 16:28:25.517065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.455 [2024-11-20 16:28:25.517370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.455 [2024-11-20 16:28:25.517386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.455 [2024-11-20 16:28:25.517393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.455 [2024-11-20 16:28:25.517561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.455 [2024-11-20 16:28:25.517734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.455 [2024-11-20 16:28:25.517743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.455 [2024-11-20 16:28:25.517749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.455 [2024-11-20 16:28:25.517755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.455 [2024-11-20 16:28:25.529877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.455 [2024-11-20 16:28:25.530295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.455 [2024-11-20 16:28:25.530312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.455 [2024-11-20 16:28:25.530320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.455 [2024-11-20 16:28:25.530487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.455 [2024-11-20 16:28:25.530656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.455 [2024-11-20 16:28:25.530665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.455 [2024-11-20 16:28:25.530671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.455 [2024-11-20 16:28:25.530677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.455 [2024-11-20 16:28:25.542654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.455 [2024-11-20 16:28:25.543073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.455 [2024-11-20 16:28:25.543089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.455 [2024-11-20 16:28:25.543096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.455 [2024-11-20 16:28:25.543272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.455 [2024-11-20 16:28:25.543441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.455 [2024-11-20 16:28:25.543449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.455 [2024-11-20 16:28:25.543455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.455 [2024-11-20 16:28:25.543461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.455 [2024-11-20 16:28:25.555413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.455 [2024-11-20 16:28:25.555766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.455 [2024-11-20 16:28:25.555783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.455 [2024-11-20 16:28:25.555790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.455 [2024-11-20 16:28:25.555958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.455 [2024-11-20 16:28:25.556127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.455 [2024-11-20 16:28:25.556136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.455 [2024-11-20 16:28:25.556147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.455 [2024-11-20 16:28:25.556154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.455 [2024-11-20 16:28:25.568277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.455 [2024-11-20 16:28:25.568625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.455 [2024-11-20 16:28:25.568642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.455 [2024-11-20 16:28:25.568649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.455 [2024-11-20 16:28:25.568817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.455 [2024-11-20 16:28:25.568987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.455 [2024-11-20 16:28:25.568995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.455 [2024-11-20 16:28:25.569001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.455 [2024-11-20 16:28:25.569007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.455 [2024-11-20 16:28:25.581122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.455 [2024-11-20 16:28:25.581471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.455 [2024-11-20 16:28:25.581488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.455 [2024-11-20 16:28:25.581495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.455 [2024-11-20 16:28:25.581664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.455 [2024-11-20 16:28:25.581832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.455 [2024-11-20 16:28:25.581840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.455 [2024-11-20 16:28:25.581846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.455 [2024-11-20 16:28:25.581853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.455 [2024-11-20 16:28:25.593978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.455 [2024-11-20 16:28:25.594467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.455 [2024-11-20 16:28:25.594485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.455 [2024-11-20 16:28:25.594492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.455 [2024-11-20 16:28:25.594651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.455 [2024-11-20 16:28:25.594809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.455 [2024-11-20 16:28:25.594817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.455 [2024-11-20 16:28:25.594824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.455 [2024-11-20 16:28:25.594831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.455 [2024-11-20 16:28:25.607045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.455 [2024-11-20 16:28:25.607488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.455 [2024-11-20 16:28:25.607521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.455 [2024-11-20 16:28:25.607529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.455 [2024-11-20 16:28:25.607702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.455 [2024-11-20 16:28:25.607877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.455 [2024-11-20 16:28:25.607885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.456 [2024-11-20 16:28:25.607891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.456 [2024-11-20 16:28:25.607898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.456 [2024-11-20 16:28:25.619815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.456 [2024-11-20 16:28:25.620234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-11-20 16:28:25.620278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.456 [2024-11-20 16:28:25.620302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.456 [2024-11-20 16:28:25.620885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.456 [2024-11-20 16:28:25.621326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.456 [2024-11-20 16:28:25.621334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.456 [2024-11-20 16:28:25.621340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.456 [2024-11-20 16:28:25.621346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.456 [2024-11-20 16:28:25.632673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.456 [2024-11-20 16:28:25.633090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-11-20 16:28:25.633107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.456 [2024-11-20 16:28:25.633114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.456 [2024-11-20 16:28:25.633288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.456 [2024-11-20 16:28:25.633457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.456 [2024-11-20 16:28:25.633465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.456 [2024-11-20 16:28:25.633472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.456 [2024-11-20 16:28:25.633478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.456 [2024-11-20 16:28:25.645404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.456 [2024-11-20 16:28:25.645798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-11-20 16:28:25.645814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.456 [2024-11-20 16:28:25.645824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.456 [2024-11-20 16:28:25.645984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.456 [2024-11-20 16:28:25.646148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.456 [2024-11-20 16:28:25.646156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.456 [2024-11-20 16:28:25.646162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.456 [2024-11-20 16:28:25.646168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.456 [2024-11-20 16:28:25.658183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.456 [2024-11-20 16:28:25.658613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-11-20 16:28:25.658658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.456 [2024-11-20 16:28:25.658681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.456 [2024-11-20 16:28:25.659207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.456 [2024-11-20 16:28:25.659377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.456 [2024-11-20 16:28:25.659386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.456 [2024-11-20 16:28:25.659392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.456 [2024-11-20 16:28:25.659398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.456 [2024-11-20 16:28:25.671026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.456 [2024-11-20 16:28:25.671451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-11-20 16:28:25.671496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.456 [2024-11-20 16:28:25.671520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.456 [2024-11-20 16:28:25.672101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.456 [2024-11-20 16:28:25.672298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.456 [2024-11-20 16:28:25.672307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.456 [2024-11-20 16:28:25.672313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.456 [2024-11-20 16:28:25.672320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.716 [2024-11-20 16:28:25.684092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.716 [2024-11-20 16:28:25.684529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.716 [2024-11-20 16:28:25.684547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.716 [2024-11-20 16:28:25.684555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.716 [2024-11-20 16:28:25.684744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.716 [2024-11-20 16:28:25.684922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.716 [2024-11-20 16:28:25.684930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.716 [2024-11-20 16:28:25.684937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.716 [2024-11-20 16:28:25.684944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.716 [2024-11-20 16:28:25.696920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.716 [2024-11-20 16:28:25.697302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.716 [2024-11-20 16:28:25.697320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.716 [2024-11-20 16:28:25.697328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.716 [2024-11-20 16:28:25.697498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.716 [2024-11-20 16:28:25.697667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.716 [2024-11-20 16:28:25.697677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.716 [2024-11-20 16:28:25.697683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.716 [2024-11-20 16:28:25.697689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.716 [2024-11-20 16:28:25.709848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.716 [2024-11-20 16:28:25.710310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.716 [2024-11-20 16:28:25.710357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.716 [2024-11-20 16:28:25.710380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.716 [2024-11-20 16:28:25.710965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.716 [2024-11-20 16:28:25.711196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.716 [2024-11-20 16:28:25.711212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.716 [2024-11-20 16:28:25.711219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.716 [2024-11-20 16:28:25.711225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.716 [2024-11-20 16:28:25.722717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.716 [2024-11-20 16:28:25.723113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.716 [2024-11-20 16:28:25.723130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.716 [2024-11-20 16:28:25.723137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.716 [2024-11-20 16:28:25.723321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.716 [2024-11-20 16:28:25.723495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.716 [2024-11-20 16:28:25.723503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.716 [2024-11-20 16:28:25.723509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.716 [2024-11-20 16:28:25.723519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.716 [2024-11-20 16:28:25.735702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.716 [2024-11-20 16:28:25.736111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.716 [2024-11-20 16:28:25.736157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.716 [2024-11-20 16:28:25.736180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.716 [2024-11-20 16:28:25.736777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.716 [2024-11-20 16:28:25.737382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.716 [2024-11-20 16:28:25.737390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.716 [2024-11-20 16:28:25.737397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.716 [2024-11-20 16:28:25.737403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.716 [2024-11-20 16:28:25.748577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.716 [2024-11-20 16:28:25.748991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.716 [2024-11-20 16:28:25.749008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.716 [2024-11-20 16:28:25.749015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.716 [2024-11-20 16:28:25.749184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.716 [2024-11-20 16:28:25.749362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.716 [2024-11-20 16:28:25.749371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.716 [2024-11-20 16:28:25.749377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.716 [2024-11-20 16:28:25.749383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.716 [2024-11-20 16:28:25.761400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.716 [2024-11-20 16:28:25.761727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.716 [2024-11-20 16:28:25.761743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.716 [2024-11-20 16:28:25.761750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.716 [2024-11-20 16:28:25.761910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.716 [2024-11-20 16:28:25.762070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.716 [2024-11-20 16:28:25.762078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.716 [2024-11-20 16:28:25.762084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.716 [2024-11-20 16:28:25.762090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.716 [2024-11-20 16:28:25.774167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.716 [2024-11-20 16:28:25.774520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.716 [2024-11-20 16:28:25.774537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.716 [2024-11-20 16:28:25.774544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.716 [2024-11-20 16:28:25.774713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.716 [2024-11-20 16:28:25.774881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.716 [2024-11-20 16:28:25.774889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.716 [2024-11-20 16:28:25.774895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.716 [2024-11-20 16:28:25.774902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.716 [2024-11-20 16:28:25.786980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.716 [2024-11-20 16:28:25.787394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.716 [2024-11-20 16:28:25.787439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.716 [2024-11-20 16:28:25.787462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.716 [2024-11-20 16:28:25.788045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.716 [2024-11-20 16:28:25.788569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.716 [2024-11-20 16:28:25.788578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.716 [2024-11-20 16:28:25.788584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.717 [2024-11-20 16:28:25.788590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.717 [2024-11-20 16:28:25.799789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.717 [2024-11-20 16:28:25.800225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.717 [2024-11-20 16:28:25.800243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.717 [2024-11-20 16:28:25.800250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.717 [2024-11-20 16:28:25.800419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.717 [2024-11-20 16:28:25.800589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.717 [2024-11-20 16:28:25.800597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.717 [2024-11-20 16:28:25.800603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.717 [2024-11-20 16:28:25.800609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.717 [2024-11-20 16:28:25.812655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.717 [2024-11-20 16:28:25.813099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.717 [2024-11-20 16:28:25.813115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.717 [2024-11-20 16:28:25.813123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.717 [2024-11-20 16:28:25.813303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.717 [2024-11-20 16:28:25.813472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.717 [2024-11-20 16:28:25.813481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.717 [2024-11-20 16:28:25.813487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.717 [2024-11-20 16:28:25.813493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.717 [2024-11-20 16:28:25.825705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.717 [2024-11-20 16:28:25.826068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.717 [2024-11-20 16:28:25.826114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.717 [2024-11-20 16:28:25.826139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.717 [2024-11-20 16:28:25.826671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.717 [2024-11-20 16:28:25.826841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.717 [2024-11-20 16:28:25.826849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.717 [2024-11-20 16:28:25.826856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.717 [2024-11-20 16:28:25.826863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.717 [2024-11-20 16:28:25.838511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.717 [2024-11-20 16:28:25.838874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.717 [2024-11-20 16:28:25.838890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.717 [2024-11-20 16:28:25.838897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.717 [2024-11-20 16:28:25.839066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.717 [2024-11-20 16:28:25.839325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.717 [2024-11-20 16:28:25.839336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.717 [2024-11-20 16:28:25.839342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.717 [2024-11-20 16:28:25.839348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.717 [2024-11-20 16:28:25.851465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.717 [2024-11-20 16:28:25.851840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.717 [2024-11-20 16:28:25.851857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.717 [2024-11-20 16:28:25.851865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.717 [2024-11-20 16:28:25.852039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.717 [2024-11-20 16:28:25.852217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.717 [2024-11-20 16:28:25.852230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.717 [2024-11-20 16:28:25.852236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.717 [2024-11-20 16:28:25.852243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.717 [2024-11-20 16:28:25.864571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.717 [2024-11-20 16:28:25.865001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.717 [2024-11-20 16:28:25.865018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.717 [2024-11-20 16:28:25.865027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.717 [2024-11-20 16:28:25.865207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.717 [2024-11-20 16:28:25.865381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.717 [2024-11-20 16:28:25.865390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.717 [2024-11-20 16:28:25.865398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.717 [2024-11-20 16:28:25.865405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.717 [2024-11-20 16:28:25.877415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.717 [2024-11-20 16:28:25.877772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.717 [2024-11-20 16:28:25.877788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.717 [2024-11-20 16:28:25.877795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.717 [2024-11-20 16:28:25.877964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.717 [2024-11-20 16:28:25.878138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.717 [2024-11-20 16:28:25.878146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.717 [2024-11-20 16:28:25.878152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.717 [2024-11-20 16:28:25.878158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.717 [2024-11-20 16:28:25.890241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.717 [2024-11-20 16:28:25.890557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.717 [2024-11-20 16:28:25.890573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.717 [2024-11-20 16:28:25.890580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.717 [2024-11-20 16:28:25.890740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.717 [2024-11-20 16:28:25.890899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.717 [2024-11-20 16:28:25.890907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.717 [2024-11-20 16:28:25.890913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.717 [2024-11-20 16:28:25.890923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.717 [2024-11-20 16:28:25.903109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.717 [2024-11-20 16:28:25.903468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.717 [2024-11-20 16:28:25.903514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.717 [2024-11-20 16:28:25.903537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.717 [2024-11-20 16:28:25.904121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.717 [2024-11-20 16:28:25.904717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.717 [2024-11-20 16:28:25.904726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.717 [2024-11-20 16:28:25.904732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.717 [2024-11-20 16:28:25.904738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.717 [2024-11-20 16:28:25.915850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.717 [2024-11-20 16:28:25.916267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.717 [2024-11-20 16:28:25.916284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.718 [2024-11-20 16:28:25.916291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.718 [2024-11-20 16:28:25.916459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.718 [2024-11-20 16:28:25.916629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.718 [2024-11-20 16:28:25.916637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.718 [2024-11-20 16:28:25.916643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.718 [2024-11-20 16:28:25.916650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.718 [2024-11-20 16:28:25.928740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.718 [2024-11-20 16:28:25.929179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-11-20 16:28:25.929234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.718 [2024-11-20 16:28:25.929258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.718 [2024-11-20 16:28:25.929746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.718 [2024-11-20 16:28:25.929906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.718 [2024-11-20 16:28:25.929914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.718 [2024-11-20 16:28:25.929920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.718 [2024-11-20 16:28:25.929926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.718 [2024-11-20 16:28:25.941564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.718 [2024-11-20 16:28:25.941971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-11-20 16:28:25.941988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.718 [2024-11-20 16:28:25.941996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.718 [2024-11-20 16:28:25.942175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.718 [2024-11-20 16:28:25.942368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.718 [2024-11-20 16:28:25.942377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.718 [2024-11-20 16:28:25.942384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.718 [2024-11-20 16:28:25.942390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.977 [2024-11-20 16:28:25.954571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.977 [2024-11-20 16:28:25.954972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.977 [2024-11-20 16:28:25.954990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.977 [2024-11-20 16:28:25.954998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.977 [2024-11-20 16:28:25.955168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.977 [2024-11-20 16:28:25.955345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.977 [2024-11-20 16:28:25.955353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.978 [2024-11-20 16:28:25.955360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.978 [2024-11-20 16:28:25.955366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.978 [2024-11-20 16:28:25.967436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.978 [2024-11-20 16:28:25.967848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.978 [2024-11-20 16:28:25.967896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.978 [2024-11-20 16:28:25.967920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.978 [2024-11-20 16:28:25.968520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.978 [2024-11-20 16:28:25.968979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.978 [2024-11-20 16:28:25.968987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.978 [2024-11-20 16:28:25.968993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.978 [2024-11-20 16:28:25.969000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.978 [2024-11-20 16:28:25.980184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.978 [2024-11-20 16:28:25.980610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.978 [2024-11-20 16:28:25.980627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.978 [2024-11-20 16:28:25.980634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.978 [2024-11-20 16:28:25.980806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.978 [2024-11-20 16:28:25.980975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.978 [2024-11-20 16:28:25.980983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.978 [2024-11-20 16:28:25.980989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.978 [2024-11-20 16:28:25.980996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.978 [2024-11-20 16:28:25.992920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.978 [2024-11-20 16:28:25.993327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.978 [2024-11-20 16:28:25.993344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.978 [2024-11-20 16:28:25.993351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.978 [2024-11-20 16:28:25.993510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.978 [2024-11-20 16:28:25.993669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.978 [2024-11-20 16:28:25.993677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.978 [2024-11-20 16:28:25.993683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.978 [2024-11-20 16:28:25.993689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.978 [2024-11-20 16:28:26.005753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.978 [2024-11-20 16:28:26.006145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.978 [2024-11-20 16:28:26.006162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.978 [2024-11-20 16:28:26.006169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.978 [2024-11-20 16:28:26.006357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.978 [2024-11-20 16:28:26.006527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.978 [2024-11-20 16:28:26.006535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.978 [2024-11-20 16:28:26.006541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.978 [2024-11-20 16:28:26.006547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.978 [2024-11-20 16:28:26.018559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.978 [2024-11-20 16:28:26.018955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.978 [2024-11-20 16:28:26.018971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.978 [2024-11-20 16:28:26.018978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.978 [2024-11-20 16:28:26.019137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.978 [2024-11-20 16:28:26.019323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.978 [2024-11-20 16:28:26.019335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.978 [2024-11-20 16:28:26.019341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.978 [2024-11-20 16:28:26.019347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.978 [2024-11-20 16:28:26.031426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.978 [2024-11-20 16:28:26.031827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.978 [2024-11-20 16:28:26.031873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.978 [2024-11-20 16:28:26.031896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.978 [2024-11-20 16:28:26.032494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.978 [2024-11-20 16:28:26.033039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.978 [2024-11-20 16:28:26.033047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.978 [2024-11-20 16:28:26.033053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.978 [2024-11-20 16:28:26.033059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.978 [2024-11-20 16:28:26.044243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.978 [2024-11-20 16:28:26.044656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.978 [2024-11-20 16:28:26.044673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.978 [2024-11-20 16:28:26.044680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.978 [2024-11-20 16:28:26.044848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.978 [2024-11-20 16:28:26.045021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.978 [2024-11-20 16:28:26.045030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.978 [2024-11-20 16:28:26.045036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.978 [2024-11-20 16:28:26.045043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.978 [2024-11-20 16:28:26.057069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.978 [2024-11-20 16:28:26.057500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.978 [2024-11-20 16:28:26.057517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.978 [2024-11-20 16:28:26.057524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.978 [2024-11-20 16:28:26.057691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.978 [2024-11-20 16:28:26.057859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.978 [2024-11-20 16:28:26.057867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.978 [2024-11-20 16:28:26.057873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.978 [2024-11-20 16:28:26.057883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.978 [2024-11-20 16:28:26.069849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.978 [2024-11-20 16:28:26.070206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.978 [2024-11-20 16:28:26.070224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.978 [2024-11-20 16:28:26.070247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.978 [2024-11-20 16:28:26.070415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.978 [2024-11-20 16:28:26.070588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.978 [2024-11-20 16:28:26.070596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.978 [2024-11-20 16:28:26.070602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.978 [2024-11-20 16:28:26.070609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.978 [2024-11-20 16:28:26.082713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.978 [2024-11-20 16:28:26.083150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.978 [2024-11-20 16:28:26.083166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.978 [2024-11-20 16:28:26.083174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.978 [2024-11-20 16:28:26.083348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.979 [2024-11-20 16:28:26.083517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.979 [2024-11-20 16:28:26.083526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.979 [2024-11-20 16:28:26.083532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.979 [2024-11-20 16:28:26.083538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.979 [2024-11-20 16:28:26.095493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.979 [2024-11-20 16:28:26.095931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.979 [2024-11-20 16:28:26.095949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.979 [2024-11-20 16:28:26.095956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.979 [2024-11-20 16:28:26.096124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.979 [2024-11-20 16:28:26.096299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.979 [2024-11-20 16:28:26.096308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.979 [2024-11-20 16:28:26.096314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.979 [2024-11-20 16:28:26.096320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.979 [2024-11-20 16:28:26.108335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.979 [2024-11-20 16:28:26.108717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.979 [2024-11-20 16:28:26.108738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.979 [2024-11-20 16:28:26.108745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.979 [2024-11-20 16:28:26.108914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.979 [2024-11-20 16:28:26.109082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.979 [2024-11-20 16:28:26.109092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.979 [2024-11-20 16:28:26.109100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.979 [2024-11-20 16:28:26.109107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.979 [2024-11-20 16:28:26.121441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.979 [2024-11-20 16:28:26.121900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.979 [2024-11-20 16:28:26.121945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.979 [2024-11-20 16:28:26.121968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.979 [2024-11-20 16:28:26.122477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.979 [2024-11-20 16:28:26.122653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.979 [2024-11-20 16:28:26.122661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.979 [2024-11-20 16:28:26.122669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.979 [2024-11-20 16:28:26.122675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.979 [2024-11-20 16:28:26.134315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.979 [2024-11-20 16:28:26.134757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.979 [2024-11-20 16:28:26.134801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.979 [2024-11-20 16:28:26.134825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.979 [2024-11-20 16:28:26.135313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.979 [2024-11-20 16:28:26.135482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.979 [2024-11-20 16:28:26.135490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.979 [2024-11-20 16:28:26.135497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.979 [2024-11-20 16:28:26.135503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.979 [2024-11-20 16:28:26.147137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.979 [2024-11-20 16:28:26.147576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.979 [2024-11-20 16:28:26.147593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.979 [2024-11-20 16:28:26.147600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.979 [2024-11-20 16:28:26.147772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.979 [2024-11-20 16:28:26.147942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.979 [2024-11-20 16:28:26.147950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.979 [2024-11-20 16:28:26.147956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.979 [2024-11-20 16:28:26.147962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.979 [2024-11-20 16:28:26.159899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.979 [2024-11-20 16:28:26.160310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.979 [2024-11-20 16:28:26.160326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.979 [2024-11-20 16:28:26.160333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.979 [2024-11-20 16:28:26.160492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.979 [2024-11-20 16:28:26.160651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.979 [2024-11-20 16:28:26.160659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.979 [2024-11-20 16:28:26.160665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.979 [2024-11-20 16:28:26.160671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.979 [2024-11-20 16:28:26.172736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.979 [2024-11-20 16:28:26.173167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.979 [2024-11-20 16:28:26.173224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.979 [2024-11-20 16:28:26.173249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.979 [2024-11-20 16:28:26.173832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.979 [2024-11-20 16:28:26.174252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.979 [2024-11-20 16:28:26.174270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.979 [2024-11-20 16:28:26.174284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.979 [2024-11-20 16:28:26.174298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.979 [2024-11-20 16:28:26.187634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.979 [2024-11-20 16:28:26.188164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.979 [2024-11-20 16:28:26.188221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.979 [2024-11-20 16:28:26.188246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.979 [2024-11-20 16:28:26.188724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.979 [2024-11-20 16:28:26.188979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.979 [2024-11-20 16:28:26.188997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.979 [2024-11-20 16:28:26.189007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.979 [2024-11-20 16:28:26.189016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.979 [2024-11-20 16:28:26.200627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.979 [2024-11-20 16:28:26.201048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-11-20 16:28:26.201065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:54.980 [2024-11-20 16:28:26.201072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:54.980 [2024-11-20 16:28:26.201247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:54.980 [2024-11-20 16:28:26.201415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.980 [2024-11-20 16:28:26.201423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.980 [2024-11-20 16:28:26.201430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.980 [2024-11-20 16:28:26.201435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.239 [2024-11-20 16:28:26.213661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.239 [2024-11-20 16:28:26.214106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.239 [2024-11-20 16:28:26.214123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.239 [2024-11-20 16:28:26.214131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.239 [2024-11-20 16:28:26.214314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.239 [2024-11-20 16:28:26.214488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.240 [2024-11-20 16:28:26.214497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.240 [2024-11-20 16:28:26.214503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.240 [2024-11-20 16:28:26.214510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.240 [2024-11-20 16:28:26.226536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.240 [2024-11-20 16:28:26.226881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.240 [2024-11-20 16:28:26.226897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.240 [2024-11-20 16:28:26.226904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.240 [2024-11-20 16:28:26.227064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.240 [2024-11-20 16:28:26.227231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.240 [2024-11-20 16:28:26.227252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.240 [2024-11-20 16:28:26.227258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.240 [2024-11-20 16:28:26.227264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.240 [2024-11-20 16:28:26.239415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.240 [2024-11-20 16:28:26.239860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.240 [2024-11-20 16:28:26.239876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.240 [2024-11-20 16:28:26.239883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.240 [2024-11-20 16:28:26.240042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.240 [2024-11-20 16:28:26.240208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.240 [2024-11-20 16:28:26.240216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.240 [2024-11-20 16:28:26.240222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.240 [2024-11-20 16:28:26.240245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.240 [2024-11-20 16:28:26.252284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.240 [2024-11-20 16:28:26.252678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.240 [2024-11-20 16:28:26.252724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.240 [2024-11-20 16:28:26.252748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.240 [2024-11-20 16:28:26.253349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.240 [2024-11-20 16:28:26.253554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.240 [2024-11-20 16:28:26.253562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.240 [2024-11-20 16:28:26.253568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.240 [2024-11-20 16:28:26.253576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.240 [2024-11-20 16:28:26.265134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.240 [2024-11-20 16:28:26.265506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.240 [2024-11-20 16:28:26.265523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.240 [2024-11-20 16:28:26.265530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.240 [2024-11-20 16:28:26.265699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.240 [2024-11-20 16:28:26.265867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.240 [2024-11-20 16:28:26.265875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.240 [2024-11-20 16:28:26.265881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.240 [2024-11-20 16:28:26.265887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.240 [2024-11-20 16:28:26.277970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.240 [2024-11-20 16:28:26.278301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.240 [2024-11-20 16:28:26.278321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.240 [2024-11-20 16:28:26.278328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.240 [2024-11-20 16:28:26.278488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.240 [2024-11-20 16:28:26.278648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.240 [2024-11-20 16:28:26.278656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.240 [2024-11-20 16:28:26.278661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.240 [2024-11-20 16:28:26.278667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.240 5731.40 IOPS, 22.39 MiB/s [2024-11-20T15:28:26.474Z] [2024-11-20 16:28:26.290702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.240 [2024-11-20 16:28:26.291095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.240 [2024-11-20 16:28:26.291112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.240 [2024-11-20 16:28:26.291119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.240 [2024-11-20 16:28:26.291304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.240 [2024-11-20 16:28:26.291473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.240 [2024-11-20 16:28:26.291481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.240 [2024-11-20 16:28:26.291487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.240 [2024-11-20 16:28:26.291493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.240 [2024-11-20 16:28:26.303447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.240 [2024-11-20 16:28:26.303892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.240 [2024-11-20 16:28:26.303942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.240 [2024-11-20 16:28:26.303966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.240 [2024-11-20 16:28:26.304523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.240 [2024-11-20 16:28:26.304692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.240 [2024-11-20 16:28:26.304700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.240 [2024-11-20 16:28:26.304707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.240 [2024-11-20 16:28:26.304713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.240 [2024-11-20 16:28:26.316208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.240 [2024-11-20 16:28:26.316624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.240 [2024-11-20 16:28:26.316640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.240 [2024-11-20 16:28:26.316647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.240 [2024-11-20 16:28:26.316810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.240 [2024-11-20 16:28:26.316968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.240 [2024-11-20 16:28:26.316976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.240 [2024-11-20 16:28:26.316982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.240 [2024-11-20 16:28:26.316988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.240 [2024-11-20 16:28:26.329008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.240 [2024-11-20 16:28:26.329432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.240 [2024-11-20 16:28:26.329478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.240 [2024-11-20 16:28:26.329502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.240 [2024-11-20 16:28:26.330084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.240 [2024-11-20 16:28:26.330682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.240 [2024-11-20 16:28:26.330691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.240 [2024-11-20 16:28:26.330697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.240 [2024-11-20 16:28:26.330703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.240 [2024-11-20 16:28:26.341891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.240 [2024-11-20 16:28:26.342281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.240 [2024-11-20 16:28:26.342297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.241 [2024-11-20 16:28:26.342304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.241 [2024-11-20 16:28:26.342464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.241 [2024-11-20 16:28:26.342623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.241 [2024-11-20 16:28:26.342631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.241 [2024-11-20 16:28:26.342637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.241 [2024-11-20 16:28:26.342643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.241 [2024-11-20 16:28:26.354646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.241 [2024-11-20 16:28:26.355059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.241 [2024-11-20 16:28:26.355075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.241 [2024-11-20 16:28:26.355081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.241 [2024-11-20 16:28:26.355263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.241 [2024-11-20 16:28:26.355431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.241 [2024-11-20 16:28:26.355442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.241 [2024-11-20 16:28:26.355448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.241 [2024-11-20 16:28:26.355455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.241 [2024-11-20 16:28:26.367592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.241 [2024-11-20 16:28:26.367912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.241 [2024-11-20 16:28:26.367928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.241 [2024-11-20 16:28:26.367935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.241 [2024-11-20 16:28:26.368104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.241 [2024-11-20 16:28:26.368293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.241 [2024-11-20 16:28:26.368304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.241 [2024-11-20 16:28:26.368311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.241 [2024-11-20 16:28:26.368319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.241 [2024-11-20 16:28:26.380730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.241 [2024-11-20 16:28:26.381166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.241 [2024-11-20 16:28:26.381183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.241 [2024-11-20 16:28:26.381191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.241 [2024-11-20 16:28:26.381369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.241 [2024-11-20 16:28:26.381542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.241 [2024-11-20 16:28:26.381550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.241 [2024-11-20 16:28:26.381557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.241 [2024-11-20 16:28:26.381563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.241 [2024-11-20 16:28:26.393598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.241 [2024-11-20 16:28:26.394024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.241 [2024-11-20 16:28:26.394041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.241 [2024-11-20 16:28:26.394048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.241 [2024-11-20 16:28:26.394223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.241 [2024-11-20 16:28:26.394392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.241 [2024-11-20 16:28:26.394401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.241 [2024-11-20 16:28:26.394407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.241 [2024-11-20 16:28:26.394413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.241 [2024-11-20 16:28:26.406436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.241 [2024-11-20 16:28:26.406886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.241 [2024-11-20 16:28:26.406902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.241 [2024-11-20 16:28:26.406909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.241 [2024-11-20 16:28:26.407078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.241 [2024-11-20 16:28:26.407253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.241 [2024-11-20 16:28:26.407261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.241 [2024-11-20 16:28:26.407268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.241 [2024-11-20 16:28:26.407274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.241 [2024-11-20 16:28:26.419189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.241 [2024-11-20 16:28:26.419608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.241 [2024-11-20 16:28:26.419625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.241 [2024-11-20 16:28:26.419632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.241 [2024-11-20 16:28:26.419791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.241 [2024-11-20 16:28:26.419949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.241 [2024-11-20 16:28:26.419957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.241 [2024-11-20 16:28:26.419963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.241 [2024-11-20 16:28:26.419969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.241 [2024-11-20 16:28:26.432041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.241 [2024-11-20 16:28:26.432478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.241 [2024-11-20 16:28:26.432495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.241 [2024-11-20 16:28:26.432502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.241 [2024-11-20 16:28:26.432671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.241 [2024-11-20 16:28:26.432840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.241 [2024-11-20 16:28:26.432848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.241 [2024-11-20 16:28:26.432854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.241 [2024-11-20 16:28:26.432860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.241 [2024-11-20 16:28:26.444874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.241 [2024-11-20 16:28:26.445297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.241 [2024-11-20 16:28:26.445351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.241 [2024-11-20 16:28:26.445374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.241 [2024-11-20 16:28:26.445956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.241 [2024-11-20 16:28:26.446187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.241 [2024-11-20 16:28:26.446195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.241 [2024-11-20 16:28:26.446206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.241 [2024-11-20 16:28:26.446212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.241 [2024-11-20 16:28:26.457731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.241 [2024-11-20 16:28:26.458152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.241 [2024-11-20 16:28:26.458168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.241 [2024-11-20 16:28:26.458175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.241 [2024-11-20 16:28:26.458363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.241 [2024-11-20 16:28:26.458531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.241 [2024-11-20 16:28:26.458539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.241 [2024-11-20 16:28:26.458545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.241 [2024-11-20 16:28:26.458551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.501 [2024-11-20 16:28:26.470820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.501 [2024-11-20 16:28:26.471194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-11-20 16:28:26.471258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.501 [2024-11-20 16:28:26.471284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.501 [2024-11-20 16:28:26.471870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.501 [2024-11-20 16:28:26.472280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.501 [2024-11-20 16:28:26.472289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.501 [2024-11-20 16:28:26.472295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.501 [2024-11-20 16:28:26.472302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.501 [2024-11-20 16:28:26.483561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.501 [2024-11-20 16:28:26.484018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-11-20 16:28:26.484035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.501 [2024-11-20 16:28:26.484043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.501 [2024-11-20 16:28:26.484223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.501 [2024-11-20 16:28:26.484392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.501 [2024-11-20 16:28:26.484400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.501 [2024-11-20 16:28:26.484407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.501 [2024-11-20 16:28:26.484413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.501 [2024-11-20 16:28:26.496350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.501 [2024-11-20 16:28:26.496773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-11-20 16:28:26.496789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.501 [2024-11-20 16:28:26.496796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.501 [2024-11-20 16:28:26.496956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.501 [2024-11-20 16:28:26.497116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.501 [2024-11-20 16:28:26.497124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.501 [2024-11-20 16:28:26.497130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.501 [2024-11-20 16:28:26.497136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.501 [2024-11-20 16:28:26.509082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.501 [2024-11-20 16:28:26.509520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-11-20 16:28:26.509537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.501 [2024-11-20 16:28:26.509545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.501 [2024-11-20 16:28:26.509713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.501 [2024-11-20 16:28:26.509882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.501 [2024-11-20 16:28:26.509890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.501 [2024-11-20 16:28:26.509896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.501 [2024-11-20 16:28:26.509902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.501 [2024-11-20 16:28:26.521831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.501 [2024-11-20 16:28:26.522261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-11-20 16:28:26.522308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.501 [2024-11-20 16:28:26.522331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.501 [2024-11-20 16:28:26.522914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.501 [2024-11-20 16:28:26.523147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.501 [2024-11-20 16:28:26.523155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.501 [2024-11-20 16:28:26.523164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.501 [2024-11-20 16:28:26.523170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.501 [2024-11-20 16:28:26.534662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.501 [2024-11-20 16:28:26.535088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-11-20 16:28:26.535105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.501 [2024-11-20 16:28:26.535112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.501 [2024-11-20 16:28:26.535296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.501 [2024-11-20 16:28:26.535465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.501 [2024-11-20 16:28:26.535473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.501 [2024-11-20 16:28:26.535479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.501 [2024-11-20 16:28:26.535486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.501 [2024-11-20 16:28:26.547409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.501 [2024-11-20 16:28:26.547836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-11-20 16:28:26.547882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.501 [2024-11-20 16:28:26.547905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.501 [2024-11-20 16:28:26.548414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.501 [2024-11-20 16:28:26.548583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.501 [2024-11-20 16:28:26.548591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.501 [2024-11-20 16:28:26.548597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.501 [2024-11-20 16:28:26.548604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.501 [2024-11-20 16:28:26.560274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.501 [2024-11-20 16:28:26.560618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-11-20 16:28:26.560634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.501 [2024-11-20 16:28:26.560641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.502 [2024-11-20 16:28:26.560800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.502 [2024-11-20 16:28:26.560959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.502 [2024-11-20 16:28:26.560967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.502 [2024-11-20 16:28:26.560973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.502 [2024-11-20 16:28:26.560979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.502 [2024-11-20 16:28:26.573066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.502 [2024-11-20 16:28:26.573417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-11-20 16:28:26.573434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.502 [2024-11-20 16:28:26.573441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.502 [2024-11-20 16:28:26.573609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.502 [2024-11-20 16:28:26.573778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.502 [2024-11-20 16:28:26.573786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.502 [2024-11-20 16:28:26.573792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.502 [2024-11-20 16:28:26.573798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.502 [2024-11-20 16:28:26.585905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.502 [2024-11-20 16:28:26.586316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-11-20 16:28:26.586332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.502 [2024-11-20 16:28:26.586339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.502 [2024-11-20 16:28:26.586499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.502 [2024-11-20 16:28:26.586659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.502 [2024-11-20 16:28:26.586667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.502 [2024-11-20 16:28:26.586673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.502 [2024-11-20 16:28:26.586679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.502 [2024-11-20 16:28:26.598666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.502 [2024-11-20 16:28:26.598994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-11-20 16:28:26.599011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.502 [2024-11-20 16:28:26.599018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.502 [2024-11-20 16:28:26.599177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.502 [2024-11-20 16:28:26.599363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.502 [2024-11-20 16:28:26.599372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.502 [2024-11-20 16:28:26.599379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.502 [2024-11-20 16:28:26.599385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.502 [2024-11-20 16:28:26.611448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.502 [2024-11-20 16:28:26.611844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-11-20 16:28:26.611861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.502 [2024-11-20 16:28:26.611872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.502 [2024-11-20 16:28:26.612040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.502 [2024-11-20 16:28:26.612214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.502 [2024-11-20 16:28:26.612223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.502 [2024-11-20 16:28:26.612229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.502 [2024-11-20 16:28:26.612235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.502 [2024-11-20 16:28:26.624238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.502 [2024-11-20 16:28:26.624585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-11-20 16:28:26.624602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.502 [2024-11-20 16:28:26.624610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.502 [2024-11-20 16:28:26.624778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.502 [2024-11-20 16:28:26.624946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.502 [2024-11-20 16:28:26.624954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.502 [2024-11-20 16:28:26.624960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.502 [2024-11-20 16:28:26.624967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.502 [2024-11-20 16:28:26.637415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.502 [2024-11-20 16:28:26.637696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-11-20 16:28:26.637712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.502 [2024-11-20 16:28:26.637720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.502 [2024-11-20 16:28:26.637893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.502 [2024-11-20 16:28:26.638067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.502 [2024-11-20 16:28:26.638076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.502 [2024-11-20 16:28:26.638082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.502 [2024-11-20 16:28:26.638088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.502 [2024-11-20 16:28:26.650192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.502 [2024-11-20 16:28:26.650554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-11-20 16:28:26.650600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.502 [2024-11-20 16:28:26.650624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.502 [2024-11-20 16:28:26.651218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.502 [2024-11-20 16:28:26.651734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.502 [2024-11-20 16:28:26.651742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.502 [2024-11-20 16:28:26.651748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.502 [2024-11-20 16:28:26.651755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.502 [2024-11-20 16:28:26.663193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.502 [2024-11-20 16:28:26.663499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-11-20 16:28:26.663516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.502 [2024-11-20 16:28:26.663524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.502 [2024-11-20 16:28:26.663697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.502 [2024-11-20 16:28:26.663870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.502 [2024-11-20 16:28:26.663879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.502 [2024-11-20 16:28:26.663885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.502 [2024-11-20 16:28:26.663891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.502 [2024-11-20 16:28:26.675968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.502 [2024-11-20 16:28:26.676332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-11-20 16:28:26.676350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.502 [2024-11-20 16:28:26.676357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.502 [2024-11-20 16:28:26.676526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.502 [2024-11-20 16:28:26.676694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.502 [2024-11-20 16:28:26.676702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.502 [2024-11-20 16:28:26.676709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.502 [2024-11-20 16:28:26.676715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.502 [2024-11-20 16:28:26.688818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.502 [2024-11-20 16:28:26.689157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-11-20 16:28:26.689216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.503 [2024-11-20 16:28:26.689240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.503 [2024-11-20 16:28:26.689754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.503 [2024-11-20 16:28:26.689923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.503 [2024-11-20 16:28:26.689931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.503 [2024-11-20 16:28:26.689941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.503 [2024-11-20 16:28:26.689947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.503 [2024-11-20 16:28:26.701634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.503 [2024-11-20 16:28:26.701965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-11-20 16:28:26.701981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.503 [2024-11-20 16:28:26.701988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.503 [2024-11-20 16:28:26.702156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.503 [2024-11-20 16:28:26.702330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.503 [2024-11-20 16:28:26.702339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.503 [2024-11-20 16:28:26.702345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.503 [2024-11-20 16:28:26.702352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.503 [2024-11-20 16:28:26.714483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.503 [2024-11-20 16:28:26.714785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-11-20 16:28:26.714802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.503 [2024-11-20 16:28:26.714809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.503 [2024-11-20 16:28:26.714977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.503 [2024-11-20 16:28:26.715148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.503 [2024-11-20 16:28:26.715156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.503 [2024-11-20 16:28:26.715162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.503 [2024-11-20 16:28:26.715169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.503 [2024-11-20 16:28:26.727511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.503 [2024-11-20 16:28:26.727849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-11-20 16:28:26.727867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.503 [2024-11-20 16:28:26.727875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.503 [2024-11-20 16:28:26.728049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.503 [2024-11-20 16:28:26.728253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.503 [2024-11-20 16:28:26.728262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.503 [2024-11-20 16:28:26.728269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.503 [2024-11-20 16:28:26.728276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.764 [2024-11-20 16:28:26.740490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.764 [2024-11-20 16:28:26.740833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.764 [2024-11-20 16:28:26.740850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.764 [2024-11-20 16:28:26.740858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.764 [2024-11-20 16:28:26.741028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.764 [2024-11-20 16:28:26.741196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.764 [2024-11-20 16:28:26.741211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.764 [2024-11-20 16:28:26.741217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.764 [2024-11-20 16:28:26.741224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.764 [2024-11-20 16:28:26.753590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.764 [2024-11-20 16:28:26.753879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.764 [2024-11-20 16:28:26.753896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.764 [2024-11-20 16:28:26.753903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.764 [2024-11-20 16:28:26.754077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.764 [2024-11-20 16:28:26.754258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.764 [2024-11-20 16:28:26.754267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.764 [2024-11-20 16:28:26.754273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.764 [2024-11-20 16:28:26.754280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.764 [2024-11-20 16:28:26.766677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.764 [2024-11-20 16:28:26.767036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.764 [2024-11-20 16:28:26.767053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.764 [2024-11-20 16:28:26.767061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.764 [2024-11-20 16:28:26.767239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.764 [2024-11-20 16:28:26.767413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.764 [2024-11-20 16:28:26.767422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.764 [2024-11-20 16:28:26.767428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.764 [2024-11-20 16:28:26.767435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.764 [2024-11-20 16:28:26.779715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.764 [2024-11-20 16:28:26.779975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.764 [2024-11-20 16:28:26.779991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.764 [2024-11-20 16:28:26.780002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.764 [2024-11-20 16:28:26.780170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.764 [2024-11-20 16:28:26.780344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.764 [2024-11-20 16:28:26.780353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.764 [2024-11-20 16:28:26.780360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.764 [2024-11-20 16:28:26.780367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2073961 Killed "${NVMF_APP[@]}" "$@" 00:26:55.764 [2024-11-20 16:28:26.792640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.764 [2024-11-20 16:28:26.792990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.764 [2024-11-20 16:28:26.793008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.764 [2024-11-20 16:28:26.793015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:55.764 [2024-11-20 16:28:26.793183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.764 [2024-11-20 16:28:26.793360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.764 [2024-11-20 16:28:26.793369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.764 [2024-11-20 16:28:26.793375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.764 [2024-11-20 16:28:26.793384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2075244 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2075244 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2075244 ']' 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.764 16:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.764 [2024-11-20 16:28:26.805716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.764 [2024-11-20 16:28:26.806095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.764 [2024-11-20 16:28:26.806118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.764 [2024-11-20 16:28:26.806125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.764 [2024-11-20 16:28:26.806305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.764 [2024-11-20 16:28:26.806479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.764 [2024-11-20 16:28:26.806488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.764 [2024-11-20 16:28:26.806494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.764 [2024-11-20 16:28:26.806500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.764 [2024-11-20 16:28:26.818753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.764 [2024-11-20 16:28:26.819100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.764 [2024-11-20 16:28:26.819116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.764 [2024-11-20 16:28:26.819123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.764 [2024-11-20 16:28:26.819301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.764 [2024-11-20 16:28:26.819476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.764 [2024-11-20 16:28:26.819484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.764 [2024-11-20 16:28:26.819490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.764 [2024-11-20 16:28:26.819496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.764 [2024-11-20 16:28:26.831791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.764 [2024-11-20 16:28:26.832223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.764 [2024-11-20 16:28:26.832240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.764 [2024-11-20 16:28:26.832248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.765 [2024-11-20 16:28:26.832422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.765 [2024-11-20 16:28:26.832597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.765 [2024-11-20 16:28:26.832605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.765 [2024-11-20 16:28:26.832611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.765 [2024-11-20 16:28:26.832618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.765 [2024-11-20 16:28:26.844689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.765 [2024-11-20 16:28:26.845066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.765 [2024-11-20 16:28:26.845083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.765 [2024-11-20 16:28:26.845091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.765 [2024-11-20 16:28:26.845274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.765 [2024-11-20 16:28:26.845458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.765 [2024-11-20 16:28:26.845467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.765 [2024-11-20 16:28:26.845474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.765 [2024-11-20 16:28:26.845481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.765 [2024-11-20 16:28:26.848293] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:26:55.765 [2024-11-20 16:28:26.848333] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.765 [2024-11-20 16:28:26.857679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.765 [2024-11-20 16:28:26.858024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.765 [2024-11-20 16:28:26.858042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.765 [2024-11-20 16:28:26.858049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.765 [2024-11-20 16:28:26.858228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.765 [2024-11-20 16:28:26.858401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.765 [2024-11-20 16:28:26.858410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.765 [2024-11-20 16:28:26.858416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.765 [2024-11-20 16:28:26.858423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.765 [2024-11-20 16:28:26.870718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.765 [2024-11-20 16:28:26.871176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.765 [2024-11-20 16:28:26.871193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.765 [2024-11-20 16:28:26.871207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.765 [2024-11-20 16:28:26.871382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.765 [2024-11-20 16:28:26.871556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.765 [2024-11-20 16:28:26.871565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.765 [2024-11-20 16:28:26.871571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.765 [2024-11-20 16:28:26.871577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.765 [2024-11-20 16:28:26.883731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.765 [2024-11-20 16:28:26.884074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.765 [2024-11-20 16:28:26.884092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.765 [2024-11-20 16:28:26.884101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.765 [2024-11-20 16:28:26.884285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.765 [2024-11-20 16:28:26.884459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.765 [2024-11-20 16:28:26.884468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.765 [2024-11-20 16:28:26.884474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.765 [2024-11-20 16:28:26.884481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.765 [2024-11-20 16:28:26.896746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.765 [2024-11-20 16:28:26.897032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.765 [2024-11-20 16:28:26.897049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.765 [2024-11-20 16:28:26.897057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.765 [2024-11-20 16:28:26.897236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.765 [2024-11-20 16:28:26.897411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.765 [2024-11-20 16:28:26.897420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.765 [2024-11-20 16:28:26.897428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.765 [2024-11-20 16:28:26.897434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.765 [2024-11-20 16:28:26.909825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.765 [2024-11-20 16:28:26.910124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.765 [2024-11-20 16:28:26.910141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.765 [2024-11-20 16:28:26.910149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.765 [2024-11-20 16:28:26.910328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.765 [2024-11-20 16:28:26.910502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.765 [2024-11-20 16:28:26.910510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.765 [2024-11-20 16:28:26.910517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.765 [2024-11-20 16:28:26.910523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.765 [2024-11-20 16:28:26.922822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.765 [2024-11-20 16:28:26.923115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.765 [2024-11-20 16:28:26.923132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.765 [2024-11-20 16:28:26.923140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.765 [2024-11-20 16:28:26.923318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.765 [2024-11-20 16:28:26.923492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.765 [2024-11-20 16:28:26.923504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.765 [2024-11-20 16:28:26.923511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.765 [2024-11-20 16:28:26.923517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.765 [2024-11-20 16:28:26.927163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:55.765 [2024-11-20 16:28:26.935875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.765 [2024-11-20 16:28:26.936225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.765 [2024-11-20 16:28:26.936245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.765 [2024-11-20 16:28:26.936253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.765 [2024-11-20 16:28:26.936427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.765 [2024-11-20 16:28:26.936602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.765 [2024-11-20 16:28:26.936611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.765 [2024-11-20 16:28:26.936618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.765 [2024-11-20 16:28:26.936625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.765 [2024-11-20 16:28:26.948826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.765 [2024-11-20 16:28:26.949296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.765 [2024-11-20 16:28:26.949315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.765 [2024-11-20 16:28:26.949323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.765 [2024-11-20 16:28:26.949505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.765 [2024-11-20 16:28:26.949675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.765 [2024-11-20 16:28:26.949683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.765 [2024-11-20 16:28:26.949690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.765 [2024-11-20 16:28:26.949697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.766 [2024-11-20 16:28:26.961768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.766 [2024-11-20 16:28:26.962066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.766 [2024-11-20 16:28:26.962083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.766 [2024-11-20 16:28:26.962090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.766 [2024-11-20 16:28:26.962269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.766 [2024-11-20 16:28:26.962442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.766 [2024-11-20 16:28:26.962451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.766 [2024-11-20 16:28:26.962458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.766 [2024-11-20 16:28:26.962469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.766 [2024-11-20 16:28:26.969109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.766 [2024-11-20 16:28:26.969133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.766 [2024-11-20 16:28:26.969140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.766 [2024-11-20 16:28:26.969146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.766 [2024-11-20 16:28:26.969151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:55.766 [2024-11-20 16:28:26.970491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.766 [2024-11-20 16:28:26.970595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.766 [2024-11-20 16:28:26.970597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.766 [2024-11-20 16:28:26.974763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.766 [2024-11-20 16:28:26.975119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.766 [2024-11-20 16:28:26.975138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.766 [2024-11-20 16:28:26.975147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.766 [2024-11-20 16:28:26.975327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.766 [2024-11-20 16:28:26.975502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.766 [2024-11-20 16:28:26.975511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.766 [2024-11-20 16:28:26.975518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.766 [2024-11-20 16:28:26.975525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.766 [2024-11-20 16:28:26.987799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.766 [2024-11-20 16:28:26.988105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.766 [2024-11-20 16:28:26.988124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:55.766 [2024-11-20 16:28:26.988132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:55.766 [2024-11-20 16:28:26.988313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:55.766 [2024-11-20 16:28:26.988488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.766 [2024-11-20 16:28:26.988497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.766 [2024-11-20 16:28:26.988504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.766 [2024-11-20 16:28:26.988512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.027 [2024-11-20 16:28:27.000873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.027 [2024-11-20 16:28:27.001285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.027 [2024-11-20 16:28:27.001309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.027 [2024-11-20 16:28:27.001318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.027 [2024-11-20 16:28:27.001501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.027 [2024-11-20 16:28:27.001676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.027 [2024-11-20 16:28:27.001685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.027 [2024-11-20 16:28:27.001692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.027 [2024-11-20 16:28:27.001699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.027 [2024-11-20 16:28:27.013947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.027 [2024-11-20 16:28:27.014320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.027 [2024-11-20 16:28:27.014341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.027 [2024-11-20 16:28:27.014350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.027 [2024-11-20 16:28:27.014525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.027 [2024-11-20 16:28:27.014700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.027 [2024-11-20 16:28:27.014709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.027 [2024-11-20 16:28:27.014716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.027 [2024-11-20 16:28:27.014723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.027 [2024-11-20 16:28:27.026978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.027 [2024-11-20 16:28:27.027339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.027 [2024-11-20 16:28:27.027358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.027 [2024-11-20 16:28:27.027367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.027 [2024-11-20 16:28:27.027542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.027 [2024-11-20 16:28:27.027717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.027 [2024-11-20 16:28:27.027726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.027 [2024-11-20 16:28:27.027733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.027 [2024-11-20 16:28:27.027740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.027 [2024-11-20 16:28:27.040000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.027 [2024-11-20 16:28:27.040432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.027 [2024-11-20 16:28:27.040450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.027 [2024-11-20 16:28:27.040458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.027 [2024-11-20 16:28:27.040632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.027 [2024-11-20 16:28:27.040807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.027 [2024-11-20 16:28:27.040821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.027 [2024-11-20 16:28:27.040828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.027 [2024-11-20 16:28:27.040836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.027 [2024-11-20 16:28:27.053057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.027 [2024-11-20 16:28:27.053496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.027 [2024-11-20 16:28:27.053515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.027 [2024-11-20 16:28:27.053523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.027 [2024-11-20 16:28:27.053697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.027 [2024-11-20 16:28:27.053872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.027 [2024-11-20 16:28:27.053880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.027 [2024-11-20 16:28:27.053887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.027 [2024-11-20 16:28:27.053894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.027 [2024-11-20 16:28:27.066336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.027 [2024-11-20 16:28:27.066714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.027 [2024-11-20 16:28:27.066732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.027 [2024-11-20 16:28:27.066740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.027 [2024-11-20 16:28:27.066914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.027 [2024-11-20 16:28:27.067090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.027 [2024-11-20 16:28:27.067098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.027 [2024-11-20 16:28:27.067105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.027 [2024-11-20 16:28:27.067111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.027 [2024-11-20 16:28:27.079384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.027 [2024-11-20 16:28:27.079671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.027 [2024-11-20 16:28:27.079689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.027 [2024-11-20 16:28:27.079697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.027 [2024-11-20 16:28:27.079871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.027 [2024-11-20 16:28:27.080049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.027 [2024-11-20 16:28:27.080059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.027 [2024-11-20 16:28:27.080065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.027 [2024-11-20 16:28:27.080072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.027 [2024-11-20 16:28:27.092507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.027 [2024-11-20 16:28:27.092846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.027 [2024-11-20 16:28:27.092863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.027 [2024-11-20 16:28:27.092871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.027 [2024-11-20 16:28:27.093043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.027 [2024-11-20 16:28:27.093223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.027 [2024-11-20 16:28:27.093232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.027 [2024-11-20 16:28:27.093239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.027 [2024-11-20 16:28:27.093246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.027 [2024-11-20 16:28:27.105511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.027 [2024-11-20 16:28:27.105799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.027 [2024-11-20 16:28:27.105816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.027 [2024-11-20 16:28:27.105824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.027 [2024-11-20 16:28:27.105997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.027 [2024-11-20 16:28:27.106171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.027 [2024-11-20 16:28:27.106179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.027 [2024-11-20 16:28:27.106186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.027 [2024-11-20 16:28:27.106192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
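The trap registered at nvmf/common.sh@512 above is what keeps a failed run from leaking a target process: on SIGINT, SIGTERM or normal exit it first dumps the app's shared-memory state and then tears the target down with nvmftestfini. The idiom, exactly as traced:
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT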
00:26:56.027 [2024-11-20 16:28:27.106514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.027 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.027 [2024-11-20 16:28:27.118609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.027 [2024-11-20 16:28:27.118964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.027 [2024-11-20 16:28:27.118980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.027 [2024-11-20 16:28:27.118987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.027 [2024-11-20 16:28:27.119161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.027 [2024-11-20 16:28:27.119340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.027 [2024-11-20 16:28:27.119349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.027 [2024-11-20 16:28:27.119356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.027 [2024-11-20 16:28:27.119362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.027 [2024-11-20 16:28:27.131625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.027 [2024-11-20 16:28:27.132033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.027 [2024-11-20 16:28:27.132050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.027 [2024-11-20 16:28:27.132057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.027 [2024-11-20 16:28:27.132237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.027 [2024-11-20 16:28:27.132411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.027 [2024-11-20 16:28:27.132420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.027 [2024-11-20 16:28:27.132426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.027 [2024-11-20 16:28:27.132433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
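At this point host/bdevperf.sh has created the TCP transport (the '*** TCP Transport Init ***' notice above, produced by rpc_cmd nvmf_create_transport -t tcp -o -u 8192) and is creating a 64 MiB RAM-backed bdev with 512-byte blocks (rpc_cmd bdev_malloc_create 64 512 -b Malloc0). Outside the test harness the same two steps would look roughly like this with SPDK's scripts/rpc.py; the relative script path is an assumption about the working directory, and the transport flags are simply copied from the rpc_cmd call traced above:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # same transport options as the traced rpc_cmd call
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512-byte block size, named Malloc0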
00:26:56.028 [2024-11-20 16:28:27.144686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.028 [2024-11-20 16:28:27.145138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.028 [2024-11-20 16:28:27.145156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.028 [2024-11-20 16:28:27.145164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.028 [2024-11-20 16:28:27.145345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.028 [2024-11-20 16:28:27.145519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.028 [2024-11-20 16:28:27.145528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.028 [2024-11-20 16:28:27.145534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.028 [2024-11-20 16:28:27.145541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.028 Malloc0 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.028 [2024-11-20 16:28:27.157799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.028 [2024-11-20 16:28:27.158224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.028 [2024-11-20 16:28:27.158242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.028 [2024-11-20 16:28:27.158250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.028 [2024-11-20 16:28:27.158427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.028 [2024-11-20 16:28:27.158600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.028 [2024-11-20 16:28:27.158608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.028 [2024-11-20 16:28:27.158615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.028 [2024-11-20 16:28:27.158623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.028 [2024-11-20 16:28:27.170879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.028 [2024-11-20 16:28:27.171238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.028 [2024-11-20 16:28:27.171255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe06500 with addr=10.0.0.2, port=4420 00:26:56.028 [2024-11-20 16:28:27.171262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06500 is same with the state(6) to be set 00:26:56.028 [2024-11-20 16:28:27.171436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06500 (9): Bad file descriptor 00:26:56.028 [2024-11-20 16:28:27.171609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.028 [2024-11-20 16:28:27.171618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.028 [2024-11-20 16:28:27.171624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.028 [2024-11-20 16:28:27.171630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.028 [2024-11-20 16:28:27.171745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.028 16:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2074225 00:26:56.028 [2024-11-20 16:28:27.183909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.028 [2024-11-20 16:28:27.248612] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
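The target side is now fully wired up: subsystem nqn.2016-06.io.spdk:cnode1 was created, Malloc0 was attached to it as a namespace, and a TCP listener was opened on 10.0.0.2:4420 (the 'NVMe/TCP Target Listening' notice above). That listener is why the next reconnect attempt finally reports 'Resetting controller successful' after the long run of ECONNREFUSED failures. The equivalent standalone RPC sequence, again assuming scripts/rpc.py is run from the SPDK tree, would be roughly:
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a: allow any host, -s: serial number
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose Malloc0 as a namespace
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420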
00:26:57.219 4830.00 IOPS, 18.87 MiB/s [2024-11-20T15:28:29.524Z] 5758.00 IOPS, 22.49 MiB/s [2024-11-20T15:28:30.457Z] 6468.12 IOPS, 25.27 MiB/s [2024-11-20T15:28:31.389Z] 6996.44 IOPS, 27.33 MiB/s [2024-11-20T15:28:32.326Z] 7458.70 IOPS, 29.14 MiB/s [2024-11-20T15:28:33.702Z] 7807.64 IOPS, 30.50 MiB/s [2024-11-20T15:28:34.357Z] 8107.58 IOPS, 31.67 MiB/s [2024-11-20T15:28:35.732Z] 8355.77 IOPS, 32.64 MiB/s [2024-11-20T15:28:36.666Z] 8574.21 IOPS, 33.49 MiB/s 00:27:05.432 Latency(us) 00:27:05.432 [2024-11-20T15:28:36.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.432 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:05.432 Verification LBA range: start 0x0 length 0x4000 00:27:05.432 Nvme1n1 : 15.00 8768.42 34.25 11361.50 0.00 6338.98 581.24 16976.94 00:27:05.432 [2024-11-20T15:28:36.666Z] =================================================================================================================== 00:27:05.432 [2024-11-20T15:28:36.666Z] Total : 8768.42 34.25 11361.50 0.00 6338.98 581.24 16976.94 00:27:05.432 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:05.432 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.432 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.432 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.432 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.432 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:05.432 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:05.433 rmmod nvme_tcp 00:27:05.433 rmmod nvme_fabrics 00:27:05.433 rmmod nvme_keyring 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2075244 ']' 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2075244 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2075244 ']' 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2075244 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2075244 
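Reading the bdevperf summary above: the job ran 4096-byte verify I/O at queue depth 128 for 15 s against Nvme1n1 and averaged 8768.42 IOPS, which matches the reported 34.25 MiB/s once the 4 KiB I/O size is factored in. A quick check of that conversion (plain arithmetic, not part of the test):
echo 'scale=2; 8768.42 * 4096 / 1048576' | bc    # 34.25 MiB/s from 8768.42 IOPS at 4096 bytes per I/O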
00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2075244' 00:27:05.433 killing process with pid 2075244 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2075244 00:27:05.433 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2075244 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.692 16:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.229 16:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:08.229 00:27:08.229 real 0m26.240s 00:27:08.229 user 1m1.412s 00:27:08.229 sys 0m6.774s 00:27:08.229 16:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.229 16:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.229 ************************************ 00:27:08.229 END TEST nvmf_bdevperf 00:27:08.229 ************************************ 00:27:08.229 16:28:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:08.229 16:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:08.229 16:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.229 16:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.229 ************************************ 00:27:08.229 START TEST nvmf_target_disconnect 00:27:08.229 ************************************ 00:27:08.229 16:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:08.229 * Looking for test storage... 
00:27:08.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:08.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.229 --rc genhtml_branch_coverage=1 00:27:08.229 --rc genhtml_function_coverage=1 00:27:08.229 --rc genhtml_legend=1 00:27:08.229 --rc geninfo_all_blocks=1 00:27:08.229 --rc geninfo_unexecuted_blocks=1 00:27:08.229 00:27:08.229 ' 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:08.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.229 --rc genhtml_branch_coverage=1 00:27:08.229 --rc genhtml_function_coverage=1 00:27:08.229 --rc genhtml_legend=1 00:27:08.229 --rc geninfo_all_blocks=1 00:27:08.229 --rc geninfo_unexecuted_blocks=1 00:27:08.229 00:27:08.229 ' 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:08.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.229 --rc genhtml_branch_coverage=1 00:27:08.229 --rc genhtml_function_coverage=1 00:27:08.229 --rc genhtml_legend=1 00:27:08.229 --rc geninfo_all_blocks=1 00:27:08.229 --rc geninfo_unexecuted_blocks=1 00:27:08.229 00:27:08.229 ' 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:08.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.229 --rc genhtml_branch_coverage=1 00:27:08.229 --rc genhtml_function_coverage=1 00:27:08.229 --rc genhtml_legend=1 00:27:08.229 --rc geninfo_all_blocks=1 00:27:08.229 --rc geninfo_unexecuted_blocks=1 00:27:08.229 00:27:08.229 ' 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:08.229 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:08.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:08.230 16:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:14.801 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.801 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:14.801 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:14.801 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:14.801 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:14.801 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:14.801 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:14.801 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:14.801 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:14.801 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:14.802 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:14.802 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:14.802 Found net devices under 0000:86:00.0: cvl_0_0 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:14.802 Found net devices under 0000:86:00.1: cvl_0_1 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
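Device discovery above matched both ports of an Intel E810 NIC (0000:86:00.0 and 0000:86:00.1, PCI ID 0x8086:0x159b, ice driver) and found their net devices cvl_0_0 and cvl_0_1; the lines that follow assign cvl_0_0 to the target at 10.0.0.2 inside a dedicated network namespace and leave cvl_0_1 as the initiator at 10.0.0.1. On a host like this the same ports can be listed directly; the vendor:device filter below just mirrors the IDs printed above:
lspci -d 8086:159b    # list the E810 ports the test matched (0000:86:00.0 and 0000:86:00.1 here)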
00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:14.802 16:28:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.802 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.802 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.802 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:14.802 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:14.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:27:14.802 00:27:14.802 --- 10.0.0.2 ping statistics --- 00:27:14.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.802 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:27:14.802 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:27:14.802 00:27:14.802 --- 10.0.0.1 ping statistics --- 00:27:14.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.802 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:27:14.802 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.802 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:14.802 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:14.802 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.802 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:14.802 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:14.803 ************************************ 00:27:14.803 START TEST nvmf_target_disconnect_tc1 00:27:14.803 ************************************ 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:14.803 16:28:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:14.803 [2024-11-20 16:28:45.213573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.803 [2024-11-20 16:28:45.213618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ccab0 with addr=10.0.0.2, port=4420 00:27:14.803 [2024-11-20 16:28:45.213637] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:14.803 [2024-11-20 16:28:45.213645] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:14.803 [2024-11-20 16:28:45.213651] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:14.803 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:14.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:14.803 Initializing NVMe Controllers 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:14.803 00:27:14.803 real 0m0.117s 00:27:14.803 user 0m0.041s 00:27:14.803 sys 0m0.076s 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:14.803 ************************************ 00:27:14.803 END TEST nvmf_target_disconnect_tc1 00:27:14.803 ************************************ 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:14.803 ************************************ 00:27:14.803 START TEST nvmf_target_disconnect_tc2 00:27:14.803 ************************************ 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2080335 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2080335 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2080335 ']' 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.803 [2024-11-20 16:28:45.347511] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:27:14.803 [2024-11-20 16:28:45.347560] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.803 [2024-11-20 16:28:45.425546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:14.803 [2024-11-20 16:28:45.467237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.803 [2024-11-20 16:28:45.467273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
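For the tc2 case the target is started with '-i 0 -e 0xFFFF -m 0xF0': tracepoint group mask 0xFFFF (echoed in the app_setup_trace notices above) and a reactor core mask of 0xF0, i.e. CPU cores 4 through 7, which is exactly the set of reactors reported right after this. The mask is easy to decode by hand:
printf '%d\n' 0xF0          # 240
echo 'obase=2; 240' | bc    # 11110000 -> bits 4..7 set, so reactors run on cores 4, 5, 6 and 7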
00:27:14.803 [2024-11-20 16:28:45.467280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.803 [2024-11-20 16:28:45.467286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.803 [2024-11-20 16:28:45.467291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.803 [2024-11-20 16:28:45.468776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:14.803 [2024-11-20 16:28:45.468887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:14.803 [2024-11-20 16:28:45.468991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:14.803 [2024-11-20 16:28:45.468992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.803 Malloc0 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.803 [2024-11-20 16:28:45.633754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.803 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.803 16:28:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.804 [2024-11-20 16:28:45.662737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2080507 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:14.804 16:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.724 16:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2080335 00:27:16.724 16:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error 
(sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Write completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Write completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Write completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Write completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 [2024-11-20 16:28:47.691174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:16.724 Write completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Write completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Write completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Write completed with error (sct=0, sc=8) 00:27:16.724 starting I/O failed 00:27:16.724 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed 
with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 [2024-11-20 16:28:47.691380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 
Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Read completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 Write completed with error (sct=0, sc=8) 00:27:16.725 starting I/O failed 00:27:16.725 [2024-11-20 16:28:47.691568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.725 [2024-11-20 16:28:47.691745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.691767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.691908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.691918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.692005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.692015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.692151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.692161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.692406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.692417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.692582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.692593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 
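To recap what produces the failure storm here: after the target is provisioned and the host-side reconnect example is started, the script hard-kills the target (the kill -9 of pid 2080335 at host/target_disconnect.sh@45 above), so in-flight commands on the existing qpairs complete with a CQ transport error of -6 (-ENXIO, "No such device or address") and every new nvme_tcp_qpair_connect_sock() attempt is refused with errno 111 (ECONNREFUSED), since nothing is listening on 10.0.0.2:4420 any more. A condensed sketch of the provisioning and fault-injection steps visible in the trace, using the standard SPDK rpc.py client against the default /var/tmp/spdk.sock (command arguments are taken verbatim from the rpc_cmd calls above; $nvmfpid refers to the target started in the earlier sketch):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Provision a RAM-backed namespace behind an NVMe-oF/TCP subsystem.
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Start the host-side reconnect workload, give it time to connect, then
  # hard-kill the target to force the disconnect this test case is about.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
      -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  sleep 2
  kill -9 "$nvmfpid"   # pid 2080335 in this run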
00:27:16.725 [2024-11-20 16:28:47.692718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.692728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.692826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.692835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.693061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.693071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.693224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.693236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.693389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.693399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.693489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.693498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.693625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.693635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.693708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.693717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.693797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.693806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.693980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.693990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 
00:27:16.725 [2024-11-20 16:28:47.694125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.694135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.694219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.694229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-11-20 16:28:47.694297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-11-20 16:28:47.694307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.694384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.694393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.694477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.694487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.694622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.694631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.694756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.694765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.694912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.694923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.695057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.695067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.695135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.695145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 
00:27:16.726 [2024-11-20 16:28:47.695228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.695239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.695299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.695308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.695382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.695391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.695599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.695609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.695667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.695676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.695809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.695822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.695900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.695909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.695970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.695980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.696065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.696075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.696153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.696163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 
00:27:16.726 [2024-11-20 16:28:47.696236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.696246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.696372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.696381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.696456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.696465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.696556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.696566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.696636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.696645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.696706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.696716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.696863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.696872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.696930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.696940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.697013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.697022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.697167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.697176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 
00:27:16.726 [2024-11-20 16:28:47.697256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.697266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.697401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.697410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.697480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.697490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.697617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.697626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.697682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.697692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.697828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.697838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.697906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.697916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.697974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.697983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.698050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.698059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-11-20 16:28:47.698121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-11-20 16:28:47.698130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 
00:27:16.726 [2024-11-20 16:28:47.698189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.698198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.698273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.698284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.698460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.698469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.698525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.698534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.698594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.698603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.698737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.698746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.698807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.698816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.698958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.698967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.699035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.699191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-11-20 16:28:47.699267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.699354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.699432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.699520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.699584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.699647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.699721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.699780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.699857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.699921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.699931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-11-20 16:28:47.700089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.700098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.700161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.700171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.700246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.700257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.700333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.700342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.700431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.700440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.700498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.700508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.700578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.700587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.700660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.700670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.700743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.700752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.700829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.700839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-11-20 16:28:47.701036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.701045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.701115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.701125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.701267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.701277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.701366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.701376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.701452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.701462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.701591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.701600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.701678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 16:28:47.701687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 16:28:47.701767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.701776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.701837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.701847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.701992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.702002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 
00:27:16.728 [2024-11-20 16:28:47.702129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.702138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.702281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.702292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.702356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.702365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.702430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.702446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.702608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.702618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.702746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.702755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.702903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.702912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.703060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.703069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.703142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.703154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.703236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.703250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 
00:27:16.728 [2024-11-20 16:28:47.703332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.703345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.703411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.703423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.703503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.703515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.703582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.703594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.703729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.703743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.703929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.703967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.704079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.704111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.704233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.704268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.704445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.704476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 16:28:47.704665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 16:28:47.704697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 
00:27:16.728 [2024-11-20 16:28:47.704809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.728 [2024-11-20 16:28:47.704840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.730 [2024-11-20 16:28:47.711288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.730 [2024-11-20 16:28:47.711316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420
00:27:16.730 qpair failed and we were unable to recover it.
00:27:16.731 [2024-11-20 16:28:47.720719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.731 [2024-11-20 16:28:47.720794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:16.731 qpair failed and we were unable to recover it.
00:27:16.734 [2024-11-20 16:28:47.744167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.744200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.744411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.744443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.744685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.744716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.744901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.744933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.745173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.745213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.745389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.745421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.745679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.745711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.745835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.745866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.746052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.746083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.746254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.746288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 
00:27:16.734 [2024-11-20 16:28:47.746464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.746496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.746686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.746719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.746826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.746858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.747067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.747105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.747288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.747321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.747573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.747605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.747791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.747823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.748062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.748093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.748355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.748389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.748580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.748612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 
00:27:16.734 [2024-11-20 16:28:47.748797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.748830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 16:28:47.749004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 16:28:47.749035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.749438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.749475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.749681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.749713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.749829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.749860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.750042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.750075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.750252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.750285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.750485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.750518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.750774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.750806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.750978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.751010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 
00:27:16.735 [2024-11-20 16:28:47.751124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.751156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.751430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.751464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.751641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.751673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.751857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.751889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.752008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.752040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.752224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.752259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.752386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.752417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.752628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.752660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.752831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.752863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.753051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.753083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 
00:27:16.735 [2024-11-20 16:28:47.753261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.753294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.753486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.753517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.753635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.753666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.753856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.753888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.754149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.754182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.754487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.754519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.754805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.754837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.755028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.755058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.755297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.755329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.755602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.755633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 
00:27:16.735 [2024-11-20 16:28:47.755819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.755851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.756111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.756142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.756334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.756367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.756624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.756655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.756971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.757042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.757279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.757318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.757569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.757601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.757788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.757820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.757954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.757986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 16:28:47.758183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 16:28:47.758226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-11-20 16:28:47.758403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.758435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.758564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.758596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.758712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.758744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.758956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.758987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.759112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.759144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.759384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.759416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.759532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.759564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.759805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.759846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.760027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.760060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.760271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.760305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-11-20 16:28:47.760491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.760522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.760711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.760743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.761009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.761040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.761151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.761183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.761317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.761350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.761601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.761632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.761739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.761771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.761957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.761989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.762176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.762219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.762418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.762449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-11-20 16:28:47.762629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.762661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.762924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.762955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.763143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.763175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.763367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.763400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.763610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.763642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.763850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.763881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.764058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.764090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.764308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.764342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.764538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.764570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.764777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.764809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-11-20 16:28:47.764938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.764970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.765156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.765187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.765436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.765469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.765665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.765696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.765911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.765944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.766164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.766195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.766339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 16:28:47.766371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 16:28:47.766543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.766575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.766694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.766727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.766968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.767000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-11-20 16:28:47.767130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.767162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.767416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.767449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.767623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.767655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.767863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.767896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.768021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.768052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.768292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.768325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.768509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.768540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.768714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.768747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.768929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.768962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.769229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.769263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-11-20 16:28:47.769400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.769431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.769616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.769648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.769778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.769810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.769941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.769973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.770080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.770111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.770248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.770282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.770553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.770584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.770850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.770882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.771169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.771225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.771495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.771528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-11-20 16:28:47.771717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.771749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.771941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.771973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.772175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.772219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.772480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.772511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.772633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.772665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.772878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.772910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.773090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.773121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.773373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.773410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.773529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.773560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.773748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.773780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-11-20 16:28:47.774017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.774049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.774152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.774183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.774386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.774418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.774592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.774623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.774753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 16:28:47.774792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 16:28:47.775058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.775089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.775339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.775372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.775625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.775657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.775834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.775864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.776106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.776138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 
00:27:16.738 [2024-11-20 16:28:47.776327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.776361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.776546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.776577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.776828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.776861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.777049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.777080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.777268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.777301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.777501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.777532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.777658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.777689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.777885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.777917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.778091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.778124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.778296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.778329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 
00:27:16.738 [2024-11-20 16:28:47.778532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.778563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.778748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.778780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.778954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.778986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.779196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.779238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.779364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.779397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.779594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.779626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.779816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.779848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.779959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.779990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.780183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.780224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.780435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.780468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 
00:27:16.738 [2024-11-20 16:28:47.780576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.780608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.780835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.780868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.781057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.781089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.781261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.781295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.781501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.781532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.781727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.781759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.782017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.782049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.782289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.782323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 16:28:47.782537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 16:28:47.782570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.782824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.782856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 
00:27:16.739 [2024-11-20 16:28:47.783092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.783124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.783301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.783335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.783530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.783562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.783868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.783900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.784017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.784055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.784325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.784358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.784569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.784600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.784842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.784874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.785056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.785088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.785282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.785316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 
00:27:16.739 [2024-11-20 16:28:47.785546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.785577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.785697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.785729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.785993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.786025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.786138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.786169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.786381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.786415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.786592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.786624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.786871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.786903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.787024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.787056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.787185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.787231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.787496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.787529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 
00:27:16.739 [2024-11-20 16:28:47.787701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.787733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.788018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.788050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.788264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.788298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.788495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.788527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.788723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.788755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.788965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.788996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.789214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.789249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.789370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.789403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.789529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.789560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.789805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.789838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 
00:27:16.739 [2024-11-20 16:28:47.789964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.789995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.790119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.790152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.790422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.790455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.790573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.790605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.790795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.790826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.791042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.791075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.739 qpair failed and we were unable to recover it. 00:27:16.739 [2024-11-20 16:28:47.791258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.739 [2024-11-20 16:28:47.791292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.791405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.791436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.791607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.791639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.791760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.791791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 16:28:47.792036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.792068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.792256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.792291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.792468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.792499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.792691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.792723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.792982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.793020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.793191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.793231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.793415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.793447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.793581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.793613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.793852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.793883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.794072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.794104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 16:28:47.794284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.794317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.794506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.794538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.794713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.794745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.794852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.794884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.795128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.795159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.795371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.795405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.795518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.795552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.795749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.795782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.795901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.795934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.796117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.796151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 16:28:47.796406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.796440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.796610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.796642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.796924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.796956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.797133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.797165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.797351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.797385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.797589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.797620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.797744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.797777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.797950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.797982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.798167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.798200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.798333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.798365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 16:28:47.798602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.798634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.798770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.798802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.798911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.798943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.799141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.740 [2024-11-20 16:28:47.799173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:16.740 qpair failed and we were unable to recover it. 00:27:16.740 [2024-11-20 16:28:47.799346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.799417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.799699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.799735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.800020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.800054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.800248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.800282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.800521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.800553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.800821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.800854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:16.741 [2024-11-20 16:28:47.801044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.801076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.801196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.801241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.801442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.801476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.801660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.801693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.801883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.801924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.802167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.802200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.802413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.802446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.802620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.802652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.802870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.802903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.803076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.803109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:16.741 [2024-11-20 16:28:47.803299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.803333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.803450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.803483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.803711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.803744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.803948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.803981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.804107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.804138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.804338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.804373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.804576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.804609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.804784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.804816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.805006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.805040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.805166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.805199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:16.741 [2024-11-20 16:28:47.805488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.805521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.805785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.805817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.805922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.805955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.806148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.806180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.806373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.806407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.806547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.806580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.806694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.806726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.806907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.806939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.807139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.807172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.807362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.807394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:16.741 [2024-11-20 16:28:47.807572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.807605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.741 [2024-11-20 16:28:47.807735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.741 [2024-11-20 16:28:47.807767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.741 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.807977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.808009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.808226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.808259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.808497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.808530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.808633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.808665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.808846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.808879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.809117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.809149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.809328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.809363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.809548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.809580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 
00:27:16.742 [2024-11-20 16:28:47.809765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.809798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.809979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.810011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.810197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.810239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.810476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.810509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.810767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.810805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.811001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.811033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.811270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.811303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.811498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.811530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.811706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.811739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 00:27:16.742 [2024-11-20 16:28:47.811847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.742 [2024-11-20 16:28:47.811878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:16.742 qpair failed and we were unable to recover it. 
00:27:16.742 [2024-11-20 16:28:47.812119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.742 [2024-11-20 16:28:47.812152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:16.742 qpair failed and we were unable to recover it.
[the same three-line sequence -- connect() failed, errno = 111; sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every subsequent reconnect attempt logged between 16:28:47.812 and 16:28:47.844; only the timestamps change]
00:27:16.746 [2024-11-20 16:28:47.844661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.746 [2024-11-20 16:28:47.844732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:16.746 qpair failed and we were unable to recover it.
[the same three-line sequence then repeats for tqpair=0x1481ba0 against addr=10.0.0.2, port=4420 between 16:28:47.844 and 16:28:47.860, each attempt again ending with "qpair failed and we were unable to recover it."]
00:27:16.748 [2024-11-20 16:28:47.860798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.860830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.861006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.861037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.861230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.861263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.861450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.861482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.861603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.861635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.861892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.861924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.862141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.862173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.862420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.862453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.862698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.862730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.862863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.862894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 
00:27:16.748 [2024-11-20 16:28:47.863163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.863195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.863330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.863364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.863603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.863640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.863834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.863866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.864061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.864092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.864223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.864257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.864497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.864530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.864797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.864828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.864965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.864997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.865189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.865232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 
00:27:16.748 [2024-11-20 16:28:47.865363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.865395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.865511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.865542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.865670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.865702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.748 qpair failed and we were unable to recover it. 00:27:16.748 [2024-11-20 16:28:47.865877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.748 [2024-11-20 16:28:47.865908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.866145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.866177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.866313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.866346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.866537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.866569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.866770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.866802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.866926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.866958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.867068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.867099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 
00:27:16.749 [2024-11-20 16:28:47.867256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.867290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.867490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.867523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.867780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.867812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.868077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.868109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.868327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.868359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.868607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.868638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.868879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.868911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.869100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.869131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.869325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.869359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.869621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.869652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 
00:27:16.749 [2024-11-20 16:28:47.869766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.869798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.869981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.870012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.870125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.870156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.870471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.870505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.870612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.870645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.870900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.870931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.871172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.871224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.871411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.871442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.871708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.871740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 00:27:16.749 [2024-11-20 16:28:47.871918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.749 [2024-11-20 16:28:47.871950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.749 qpair failed and we were unable to recover it. 
00:27:16.749 [2024-11-20 16:28:47.872075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.749 [2024-11-20 16:28:47.872107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:16.749 qpair failed and we were unable to recover it.
00:27:16.749 [2024-11-20 16:28:47.872348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.749 [2024-11-20 16:28:47.872381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:16.749 qpair failed and we were unable to recover it.
00:27:16.749 [2024-11-20 16:28:47.872581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.749 [2024-11-20 16:28:47.872612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:16.749 qpair failed and we were unable to recover it.
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Write completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Write completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Write completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Write completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Write completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.749 Read completed with error (sct=0, sc=8)
00:27:16.749 starting I/O failed
00:27:16.750 Write completed with error (sct=0, sc=8)
00:27:16.750 starting I/O failed
00:27:16.750 Write completed with error (sct=0, sc=8)
00:27:16.750 starting I/O failed
00:27:16.750 Read completed with error (sct=0, sc=8)
00:27:16.750 starting I/O failed
00:27:16.750 Write completed with error (sct=0, sc=8)
00:27:16.750 starting I/O failed
00:27:16.750 Read completed with error (sct=0, sc=8)
00:27:16.750 starting I/O failed
00:27:16.750 Write completed with error (sct=0, sc=8)
00:27:16.750 starting I/O failed
00:27:16.750 [2024-11-20 16:28:47.873274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.750 [2024-11-20 16:28:47.873529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.750 [2024-11-20 16:28:47.873585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:16.750 qpair failed and we were unable to recover it.
00:27:16.750 [2024-11-20 16:28:47.873836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.750 [2024-11-20 16:28:47.873870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:16.750 qpair failed and we were unable to recover it.
00:27:16.750 [2024-11-20 16:28:47.874054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.750 [2024-11-20 16:28:47.874086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:16.750 qpair failed and we were unable to recover it.
00:27:16.750 [2024-11-20 16:28:47.874333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.750 [2024-11-20 16:28:47.874367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:16.750 qpair failed and we were unable to recover it.
00:27:16.750 [2024-11-20 16:28:47.874487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.750 [2024-11-20 16:28:47.874519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:16.750 qpair failed and we were unable to recover it.
00:27:16.750 [2024-11-20 16:28:47.874738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.750 [2024-11-20 16:28:47.874769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:16.750 qpair failed and we were unable to recover it.
00:27:16.750 [2024-11-20 16:28:47.874955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.750 [2024-11-20 16:28:47.874987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:16.750 qpair failed and we were unable to recover it.
00:27:16.750 [2024-11-20 16:28:47.875277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.750 [2024-11-20 16:28:47.875310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:16.750 qpair failed and we were unable to recover it.
00:27:16.750 [2024-11-20 16:28:47.875571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.750 [2024-11-20 16:28:47.875603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:16.750 qpair failed and we were unable to recover it.
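Here the pattern changes: the burst of "Read/Write completed with error (sct=0, sc=8)" lines is the outstanding I/O on the queue pair being completed as failures once its connection is gone (sct=0 is the NVMe generic command status type; sc=0x8 in that table is listed in the NVMe base specification as "Command Aborted due to SQ Deletion"), and the "CQ transport error -6" from spdk_nvme_qpair_process_completions carries a negated POSIX errno, 6 being ENXIO, matching the "(No such device or address)" text in the message. After that the connect retries continue, now reporting a different tqpair pointer (0x7feca4000b90) but still the same 10.0.0.2:4420 target. A small illustrative decoder for the status fields printed above, assuming only the standard NVMe status layout and covering just the values seen in this log (not SPDK's own decoding code):

/* status_decode.c - illustrative sketch only; maps the handful of NVMe
 * generic command status values relevant to this log, not SPDK's decoder. */
#include <stdio.h>

static const char *generic_sc_name(unsigned int sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other (see the NVMe base spec, Generic Command Status)";
    }
}

int main(void)
{
    unsigned int sct = 0, sc = 0x8;   /* the values printed by the failed I/Os above */

    if (sct == 0) {
        printf("sct=0 (generic command status), sc=0x%x: %s\n", sc, generic_sc_name(sc));
    } else {
        printf("sct=0x%x: non-generic status code type, sc=0x%x\n", sct, sc);
    }

    /* "CQ transport error -6" is a negated errno: 6 == ENXIO on Linux,
     * i.e. "No such device or address", exactly the string in the log. */
    return 0;
}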
00:27:16.750 [2024-11-20 16:28:47.875781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.875812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.876086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.876117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.876356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.876389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.876563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.876595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.876784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.876816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.876935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.876967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.877105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.877137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.877380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.877414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.877626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.877656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.877897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.877927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 
00:27:16.750 [2024-11-20 16:28:47.878134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.878164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.878328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.878363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.878509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.878541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.878661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.878693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.878921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.878953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.879133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.879164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.879375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.879409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.879652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.879683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.879808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.879840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.880106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.880138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 
00:27:16.750 [2024-11-20 16:28:47.880258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.880291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.880396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.880428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.880548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.880579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.880707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.880740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.880855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.880893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.881132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.750 [2024-11-20 16:28:47.881165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.750 qpair failed and we were unable to recover it. 00:27:16.750 [2024-11-20 16:28:47.881371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.881404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.881676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.881708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.881972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.882004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.882131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.882163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 
00:27:16.751 [2024-11-20 16:28:47.882341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.882373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.882596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.882629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.882757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.882788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.882919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.882952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.883123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.883154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.883354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.883387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.883494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.883525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.883701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.883732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.883925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.883957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.884089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.884120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 
00:27:16.751 [2024-11-20 16:28:47.884319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.884351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.884533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.884566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.884754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.884785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.885051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.885083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.885254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.885287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.885529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.885561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.885778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.885809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.886049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.886080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.886210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.886243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.886447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.886479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 
00:27:16.751 [2024-11-20 16:28:47.886662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.886693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.886952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.886984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.887179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.887229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.887362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.887394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.887582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.887613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.887897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.887929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.888045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.888077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.888323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.888356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.888528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.888559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 00:27:16.751 [2024-11-20 16:28:47.888687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.751 [2024-11-20 16:28:47.888718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:16.751 qpair failed and we were unable to recover it. 
00:27:16.751 [2024-11-20 16:28:47.888823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.751 [2024-11-20 16:28:47.888855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:16.751 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420), each ending with "qpair failed and we were unable to recover it.", repeats continuously from 16:28:47.889 through 16:28:47.934 ...]
00:27:17.038 [2024-11-20 16:28:47.934820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.038 [2024-11-20 16:28:47.934852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:17.038 qpair failed and we were unable to recover it.
00:27:17.038 [2024-11-20 16:28:47.935034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-11-20 16:28:47.935066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-11-20 16:28:47.935235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-11-20 16:28:47.935266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-11-20 16:28:47.935460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-11-20 16:28:47.935491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-11-20 16:28:47.935627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-11-20 16:28:47.935657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-11-20 16:28:47.935851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-11-20 16:28:47.935883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-11-20 16:28:47.936143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-11-20 16:28:47.936174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-11-20 16:28:47.936444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-11-20 16:28:47.936478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-11-20 16:28:47.936663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-11-20 16:28:47.936695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-11-20 16:28:47.936977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-11-20 16:28:47.937009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-11-20 16:28:47.937223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.937257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 
00:27:17.039 [2024-11-20 16:28:47.937380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.937411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.937559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.937590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.937782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.937814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.937995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.938028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.938241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.938274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.938448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.938479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.938657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.938690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.938832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.938863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.938988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.939021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.939136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.939168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 
00:27:17.039 [2024-11-20 16:28:47.939360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.939393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.939661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.939693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.939957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.939988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.940195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.940235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.940429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.940461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.940719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.940751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.940943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.940976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.941164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.941195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.941470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.941501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.941689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.941721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 
00:27:17.039 [2024-11-20 16:28:47.941912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.941946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.942188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.942229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.942474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.942507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.942705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.942738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.942914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.942946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.943092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.943127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.943321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.943363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.943557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.943590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.943716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.943749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.943934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.943971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 
00:27:17.039 [2024-11-20 16:28:47.944155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.944186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.944331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.944364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.944567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.944599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.944740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.944772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.944889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.944921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.945116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.945147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.945345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 16:28:47.945380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 16:28:47.945676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.945707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.945924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.945958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.946224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.946258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-11-20 16:28:47.946449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.946482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.946659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.946691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.946952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.946986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.947130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.947164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.947307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.947340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.947543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.947577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.947761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.947793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.947976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.948009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.948196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.948239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.948353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.948385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-11-20 16:28:47.948586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.948619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.948753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.948784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.949023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.949058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.949187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.949229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.949471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.949502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.949689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.949721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.949918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.949956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.950135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.950167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.950372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.950406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.950601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.950633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-11-20 16:28:47.950806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.950839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.951029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.951063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.951241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.951276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.951522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.951554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.951746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.951778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.951951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.951985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.952177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.952219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.952409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.952443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.952569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.952603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.952735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.952767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-11-20 16:28:47.953002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.953033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.953220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.953253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.953378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.953411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.953662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.953695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.953960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 16:28:47.953994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 16:28:47.954185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.954228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.954465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.954497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.954639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.954673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.954859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.954892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.955092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.955123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-11-20 16:28:47.955412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.955448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.955627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.955660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.955846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.955878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.956067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.956100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.956279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.956312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.956573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.956606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.956794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.956828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.957085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.957118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.957339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.957375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.957572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.957606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-11-20 16:28:47.957778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.957811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.957965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.957999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.958172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.958215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.958390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.958424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.958703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.958735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.958909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.958943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.959183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.959245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.959431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.959464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.959578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.959610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.959796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.959828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-11-20 16:28:47.960024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.960058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.960235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.960268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.960442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.960475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.960699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.960732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.960916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.960949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.961222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.961255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.961493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.961526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.961710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.961742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.961950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.961984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.962167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.962199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-11-20 16:28:47.962394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.962429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.962675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.962708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 16:28:47.962826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 16:28:47.962859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.962976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.963008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.963237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.963271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.963380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.963411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.963680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.963714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.963891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.963926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.964051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.964083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.964342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.964377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 
00:27:17.042 [2024-11-20 16:28:47.964570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.964603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.964781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.964814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.965005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.965038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.965223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.965259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.965377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.965414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.965555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.965587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.965695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.965727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.965965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.965999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.966118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.966149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.966430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.966465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 
00:27:17.042 [2024-11-20 16:28:47.966595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.966629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.966743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.966775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.966959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.966996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.967193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.967234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.967354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.967388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.967566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.967599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.967837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.967870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.968073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.968106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.968301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.968335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.968521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.968554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 
00:27:17.042 [2024-11-20 16:28:47.968743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.968775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.968992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.969026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.969134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.969167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.969284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.969318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.969443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.969474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.969649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.042 [2024-11-20 16:28:47.969683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.042 qpair failed and we were unable to recover it. 00:27:17.042 [2024-11-20 16:28:47.969858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.969892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.970015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.970048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.970220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.970254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.970495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.970528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 
00:27:17.043 [2024-11-20 16:28:47.970638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.970671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.970863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.970900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.971081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.971113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.971282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.971317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.971502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.971534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.971720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.971753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.971926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.971958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.972161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.972196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.972404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.972437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.972555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.972588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 
00:27:17.043 [2024-11-20 16:28:47.972764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.972796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.972979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.973012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.973192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.973243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.973350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.973393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.973656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.973691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.973912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.973949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.974225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.974263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.974393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.974427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.974623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.974659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.974854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.974890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 
00:27:17.043 [2024-11-20 16:28:47.975029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.975063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.975262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.975301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.975486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.975521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.975674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.975711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.975882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.975917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.976165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.976214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.976357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.976394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.976527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.976565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.976759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.976801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.976983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.977017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 
00:27:17.043 [2024-11-20 16:28:47.977233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.977268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.977514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.977547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.043 [2024-11-20 16:28:47.977807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.043 [2024-11-20 16:28:47.977840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.043 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.977974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.978006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.978179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.978220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.978327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.978358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.978484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.978516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.978776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.978808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.978983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.979016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.979146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.979179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 
00:27:17.044 [2024-11-20 16:28:47.979317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.979353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.979477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.979508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.979701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.979734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.979992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.980024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.980218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.980253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.980381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.980415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.980588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.980621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.980908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.980941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.981067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.981099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.981290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.981325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 
00:27:17.044 [2024-11-20 16:28:47.981446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.981480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.981669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.981703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.981854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.981886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.982148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.982181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.982380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.982413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.982653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.982685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.982951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.982983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.983119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.983154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.983344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.983377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.983595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.983627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 
00:27:17.044 [2024-11-20 16:28:47.983744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.983777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.983893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.983926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.984114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.984147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.984401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.984433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.984607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.984638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.984761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.984792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.985006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.985038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.985228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.985262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.985502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.985536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.985741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.985777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 
00:27:17.044 [2024-11-20 16:28:47.985955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-11-20 16:28:47.985986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-11-20 16:28:47.986173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.986215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.986323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.986356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.986531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.986563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.986752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.986785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.986977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.987010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.987196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.987252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.987385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.987418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.987669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.987705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.987908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.987941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 
00:27:17.045 [2024-11-20 16:28:47.988060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.988092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.988268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.988302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.988474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.988507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.988714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.988748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.988939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.988985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.989184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.989228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.989437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.989474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.989614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.989645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.989845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.989895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.990109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.990149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 
00:27:17.045 [2024-11-20 16:28:47.990288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.990324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.990526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.990561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.990689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.990723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.990925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.990958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.991138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.991172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.991296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.991331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.991594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.991637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.991810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.991843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.992038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.992073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.992270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.992306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 
00:27:17.045 [2024-11-20 16:28:47.992487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.992519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.992632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.992668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.992777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.992812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.993008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.993045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.993231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.993271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.993445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.993479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.993654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.993687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.993901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 16:28:47.993935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 16:28:47.994105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.994138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.994286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.994322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.046 [2024-11-20 16:28:47.994523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.994558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.994681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.994714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.994908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.994942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.995129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.995164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.995296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.995332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.995521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.995556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.995666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.995699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.995890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.995926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.996049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.996082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.996268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.996303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.046 [2024-11-20 16:28:47.996425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.996459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.996730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.996764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.996954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.996986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.997118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.997160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.997350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.997385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.997626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.997660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.997905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.997938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.998129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.998162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.998418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.998454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.998647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.998681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.046 [2024-11-20 16:28:47.998785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.998818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.998939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.998972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.999176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.999238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.999437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.999470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.999599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.999632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:47.999884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:47.999919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:48.000112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:48.000145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:48.000287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:48.000323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:48.000565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:48.000599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:48.000843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:48.000876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.046 [2024-11-20 16:28:48.001048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:48.001083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:48.001350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:48.001386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:48.001659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:48.001692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:48.001871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:48.001904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:48.002095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:48.002128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:48.002262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 16:28:48.002296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 16:28:48.002416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.002450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.002644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.002679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.002881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.002915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.003127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.003160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 
00:27:17.047 [2024-11-20 16:28:48.003301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.003338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.003537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.003570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.003683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.003716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.003823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.003856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.004047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.004082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.004211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.004247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.004421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.004455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.004702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.004737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.004931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.004965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.005087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.005119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 
00:27:17.047 [2024-11-20 16:28:48.005291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.005326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.005569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.005603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.005793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.005830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.005986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.006030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.006237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.006272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.006399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.006434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.006610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.006647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.006858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.006893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.007105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.007140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.007315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.007353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 
00:27:17.047 [2024-11-20 16:28:48.007527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.007563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.007698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.007738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.007856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.007889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.008099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.008136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.008326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.008362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.008627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.008663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.008860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.008900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.009038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.009074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.009197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.009244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 16:28:48.009434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 16:28:48.009470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 
00:27:17.047 [2024-11-20 16:28:48.009644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.009677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.009870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.009904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.010102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.010135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.010264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.010299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.010485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.010518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.010835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.010905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.011108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.011145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.011452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.011488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.011759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.011792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.012058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.012090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 
00:27:17.048 [2024-11-20 16:28:48.012277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.012311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.012507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.012540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.012711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.012743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.012956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.012990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.013179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.013219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.013394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.013427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.013615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.013650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.013834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.013868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.014050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.014085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.014211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.014245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 
00:27:17.048 [2024-11-20 16:28:48.014512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.014546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.014681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.014714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.014895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.014928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.015055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.015087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.015227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.015267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.015404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.015438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.015557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.015594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.015841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.015874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.016118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.016152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.016357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.016391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 
00:27:17.048 [2024-11-20 16:28:48.016589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.016623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.016791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.016823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.017038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.017071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.017222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.017257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.017400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.017432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.017635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.017669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.017801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.017835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.018018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 16:28:48.018050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 16:28:48.018320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.018356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.018560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.018593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 
00:27:17.049 [2024-11-20 16:28:48.018729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.018762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.018949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.018983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.019237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.019271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.019510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.019543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.019736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.019768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.019966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.020000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.020126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.020158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.020369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.020403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.020616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.020649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.020822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.020854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 
00:27:17.049 [2024-11-20 16:28:48.020990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.021022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.021269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.021304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.021493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.021526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.021710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.021743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.021849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.021881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.022157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.022190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.022447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.022481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.022608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.022642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.022830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.022863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.023103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.023136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 
00:27:17.049 [2024-11-20 16:28:48.023338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.023372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.023494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.023528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.023712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.023744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.023973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.024004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.024200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.024243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.024375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.024414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.024532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.024564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.024762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.024797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.025018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.025052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.025175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.025216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 
00:27:17.049 [2024-11-20 16:28:48.025331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.025371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.025581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.025615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.025744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.025775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.025967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.026000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.026179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.026219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.049 [2024-11-20 16:28:48.026336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.049 [2024-11-20 16:28:48.026368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.049 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.026555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.026589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.026801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.026835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.026948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.026980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.027180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.027225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 
00:27:17.050 [2024-11-20 16:28:48.027356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.027390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.027632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.027663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.027835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.027867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.028065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.028098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.028277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.028313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.028552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.028584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.028696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.028728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.028934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.028966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.029134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.029166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.029346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.029378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 
00:27:17.050 [2024-11-20 16:28:48.029557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.029589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.029825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.029864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.030070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.030102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.030229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.030265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.030400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.030432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.030604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.030635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.030822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.030855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.030974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.031006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.031184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.031242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.031353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.031384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 
00:27:17.050 [2024-11-20 16:28:48.031502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.031534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.031779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.031810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.032083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.032116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.032239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.032273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.032459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.032490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.032607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.032640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.032880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.032913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.033104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.033135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.033314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.033348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.033548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.033582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 
00:27:17.050 [2024-11-20 16:28:48.033698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.033732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.033936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.033968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.034142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.034182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-11-20 16:28:48.034319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-11-20 16:28:48.034352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.034482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.034516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.034651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.034682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.034813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.034845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.035089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.035121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.035300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.035335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.035574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.035607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 
00:27:17.051 [2024-11-20 16:28:48.035723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.035755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.035925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.035957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.036195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.036241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.036485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.036519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.036711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.036744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.036917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.036950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.037088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.037122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.037359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.037393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.037570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.037602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.037784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.037818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 
00:27:17.051 [2024-11-20 16:28:48.037941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.037974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.038226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.038267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.038394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.038426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.038664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.038696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.038888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.038921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.039107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.039138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.039329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.039364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.039496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.039532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.039662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.039694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.039814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.039846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 
00:27:17.051 [2024-11-20 16:28:48.040032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.040065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.040239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.040274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.040521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.040554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.040753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.040785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.040966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.040998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.041262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.041298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.041497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.041530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.041656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.041690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.041930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.041964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.042084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.042115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 
00:27:17.051 [2024-11-20 16:28:48.042292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.042326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 16:28:48.042501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 16:28:48.042533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.042662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.042694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.042945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.042977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.043092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.043131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.043343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.043376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.043504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.043536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.043654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.043686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.043807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.043841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.043953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.043987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-11-20 16:28:48.044184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.044227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.044409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.044440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.044683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.044716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.044929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.044961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.045227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.045261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.045388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.045420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.045525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.045558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.045680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.045711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.045895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.045928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.046156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.046189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-11-20 16:28:48.046379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.046412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.046540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.046579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.046765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.046799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.047012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.047044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.047241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.047274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.047516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.047550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.047729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.047760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.047884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.047918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.048104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.048136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.048322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.048355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-11-20 16:28:48.048477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.048508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.048682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.048715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.048904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.048936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.049103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.049136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 16:28:48.049379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 16:28:48.049415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.049625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.049660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.049796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.049830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.049942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.049976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.050232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.050266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.050388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.050422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-11-20 16:28:48.050545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.050576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.050709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.050742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.051025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.051059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.051180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.051236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.051424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.051456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.051657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.051691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.051875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.051907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.052167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.052199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.052405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.052438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.052572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.052604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-11-20 16:28:48.052730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.052763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.052948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.052981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.053148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.053181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.053331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.053365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.053619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.053652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.053773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.053806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.053996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.054028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.054131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.054165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.054298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.054332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.054594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.054629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-11-20 16:28:48.054807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.054850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.054956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.054994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.055123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.055155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.055381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.055414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.055630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.055663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.055789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.055820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.055935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.055967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.056082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.056113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.056238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.056274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.056408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.056440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-11-20 16:28:48.056637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.056670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.056796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.056829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 16:28:48.057006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 16:28:48.057038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.057155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.057187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.057386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.057419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.057614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.057647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.057824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.057856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.057974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.058009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.058194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.058236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.058530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.058562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 
00:27:17.054 [2024-11-20 16:28:48.058746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.058780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.058963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.058995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.059178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.059217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.059391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.059423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.059633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.059665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.059863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.059894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.060010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.060041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.060215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.060249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.060432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.060464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.060574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.060606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 
00:27:17.054 [2024-11-20 16:28:48.060781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.060812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.061050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.061082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.061342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.061378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.061573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.061605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.061809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.061842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.062032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.062064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.062315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.062348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.062615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.062648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.062843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.062875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.063182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.063238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 
00:27:17.054 [2024-11-20 16:28:48.063410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.063442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.063684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.063723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.063917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.063949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.064146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.064180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.064455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.064489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.064725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.064757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.064941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.064973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.065169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.065211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.065383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.065415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 16:28:48.065533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.065564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 
00:27:17.054 [2024-11-20 16:28:48.065667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 16:28:48.065699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.065964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.065995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.066257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.066290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.066555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.066588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.066877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.066909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.067040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.067073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.067192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.067235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.067440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.067473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.067598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.067630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.067808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.067840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 
00:27:17.055 [2024-11-20 16:28:48.068020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.068052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.068183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.068224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.068412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.068444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.068615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.068647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.068830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.068863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.069038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.069070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.069259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.069293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.069508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.069540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.069730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.069763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.069950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.069983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 
00:27:17.055 [2024-11-20 16:28:48.070093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.070125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.070317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.070350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.070463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.070495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.070687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.070719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.070912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.070944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.071061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.071093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.071264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.071297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.071564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.071596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.071781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.071814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.072005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.072038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 
00:27:17.055 [2024-11-20 16:28:48.072290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.072323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.072511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.072550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.072679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.072711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.072897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.072930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.073102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.073134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.073372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.073405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.073565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.073597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.073784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-11-20 16:28:48.073816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-11-20 16:28:48.074077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.074110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.074223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.074269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 
00:27:17.056 [2024-11-20 16:28:48.074447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.074479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.074594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.074626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.074747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.074780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.074993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.075026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.075156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.075189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.075399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.075432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.075613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.075645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.075815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.075847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.076033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.076066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.076238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.076271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 
00:27:17.056 [2024-11-20 16:28:48.076446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.076478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.076600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.076633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.076831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.076864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.077056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.077089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.077266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.077298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.077494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.077526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.077643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.077675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.077862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.077894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.078041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.078074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.078249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.078283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 
00:27:17.056 [2024-11-20 16:28:48.078467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.078499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.078617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.078649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.078793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.078825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.078938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.078970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.079135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.079167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.079303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.079336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.079604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.079636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.079781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.079813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.080072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.080105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.080279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.080312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 
00:27:17.056 [2024-11-20 16:28:48.080552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.080585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-11-20 16:28:48.080833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-11-20 16:28:48.080873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.080989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.081021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.081231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.081264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.081543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.081576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.081703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.081734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.081903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.081936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.082055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.082087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.082268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.082301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.082514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.082546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 
00:27:17.057 [2024-11-20 16:28:48.082737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.082770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.082973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.083006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.083199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.083262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.083448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.083481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.083667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.083699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.083903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.083935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.084136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.084168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.084304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.084337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.084507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.084539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.084645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.084677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 
00:27:17.057 [2024-11-20 16:28:48.084822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.084854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.084964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.084996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.085122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.085154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.085278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.085311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.085426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.085458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.085706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.085738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.085852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.085885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.086053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.086085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.086223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.086258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.086361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.086394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 
00:27:17.057 [2024-11-20 16:28:48.086600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.086632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.086804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.086836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.087018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.087050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.087238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.087271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.087396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.087429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.087694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.087726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.087843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.087874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.087992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.088024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.088218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.088251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 16:28:48.088489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 16:28:48.088521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-11-20 16:28:48.088723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.088755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.088856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.088895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.089107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.089140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.089330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.089363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.089494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.089525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.089741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.089773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.090013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.090045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.090184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.090224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.090354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.090387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.090576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.090608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-11-20 16:28:48.090800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.090832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.091033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.091065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.091262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.091296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.091482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.091515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.091641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.091673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.091948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.091981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.092198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.092240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.092412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.092444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.092585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.092618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.092735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.092767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-11-20 16:28:48.092948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.092981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.093248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.093282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.093414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.093447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.093650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.093682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.093881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.093913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.094088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.094120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.094323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.094357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.094545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.094578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.094759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.094831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.095055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.095091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-11-20 16:28:48.095231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.095267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.095384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.095417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.095522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.095554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.095830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.095862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.096028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.096060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.096263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.096297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.096536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 16:28:48.096569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 16:28:48.096818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.096851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.097068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.097100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.097295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.097329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 
00:27:17.059 [2024-11-20 16:28:48.097531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.097563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.097681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.097713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.097897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.097929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.098194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.098236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.098480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.098513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.098756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.098788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.098943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.098974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.099106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.099139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.099332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.099364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.099561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.099593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 
00:27:17.059 [2024-11-20 16:28:48.099725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.099756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.099935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.099967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.100157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.100189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.100337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.100371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.100562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.100594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.100744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.100782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.101045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.101078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.101268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.101301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.101490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.101521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.101733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.101766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 
00:27:17.059 [2024-11-20 16:28:48.101898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.101929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.102117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.102149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.102344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.102377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.102507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.102538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.102664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.102696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.102881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.102912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.103115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.103147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.103410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.103443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.103737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.103769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.103962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.103994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 
00:27:17.059 [2024-11-20 16:28:48.104145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.104177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.104381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.104413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.104652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.104685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.104926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.104958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.105213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.105246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 16:28:48.105361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 16:28:48.105393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.105681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.105713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.106001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.106033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.106297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.106330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.106455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.106487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 
00:27:17.060 [2024-11-20 16:28:48.106692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.106724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.106917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.106949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.107218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.107256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.107535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.107568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.107821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.107854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.108040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.108072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.108251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.108285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.108556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.108588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.108758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.108790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.108964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.108995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 
00:27:17.060 [2024-11-20 16:28:48.109168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.109200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.109467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.109499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.109689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.109721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.109980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.110013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.110320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.110354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.110545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.110577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.110766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.110799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.110991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.111024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.111198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.111251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 16:28:48.111433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 16:28:48.111465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 
00:27:17.060 [2024-11-20 16:28:48.111579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.060 [2024-11-20 16:28:48.111611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:17.060 qpair failed and we were unable to recover it.
00:27:17.060 [... the same two errors (posix_sock_create: connect() failed, errno = 111, then nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420) repeat continuously through 2024-11-20 16:28:48.157, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:17.066 [2024-11-20 16:28:48.157323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.066 [2024-11-20 16:28:48.157356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:17.066 qpair failed and we were unable to recover it.
00:27:17.066 [2024-11-20 16:28:48.157528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.157560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.157737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.157768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.157955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.157988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.158179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.158219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.158432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.158463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.158729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.158761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.158881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.158913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.159231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.159264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.159550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.159582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.159780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.159813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 
00:27:17.066 [2024-11-20 16:28:48.160015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.160046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.160225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.160259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.160534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.160566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.160699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.160731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.160920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.160951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.161129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.161160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.161355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 16:28:48.161399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 16:28:48.161643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.161675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.161789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.161821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.162058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.162090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 
00:27:17.067 [2024-11-20 16:28:48.162220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.162253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.162434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.162466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.162638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.162669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.162843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.162876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.163061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.163093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.163219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.163252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.163475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.163507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.163701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.163732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.163918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.163950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.164140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.164172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 
00:27:17.067 [2024-11-20 16:28:48.164409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.164441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.164623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.164655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.164763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.164795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.165052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.165084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.165317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.165350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.165475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.165508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.165778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.165809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.165996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.166028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.166291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.166325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.166528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.166559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 
00:27:17.067 [2024-11-20 16:28:48.166750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.166783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.166916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.166948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.167073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.167105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.167236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.167269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.167460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.167493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.167623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.167655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.167789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.167821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.168098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.168130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.168251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.168285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-11-20 16:28:48.168400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-11-20 16:28:48.168431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 
00:27:17.067 [2024-11-20 16:28:48.168584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.168616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.168751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.168783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.169042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.169073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.169310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.169344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.169610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.169641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.169877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.169909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.170097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.170129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.170323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.170357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.170603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.170634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.170832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.170864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 
00:27:17.068 [2024-11-20 16:28:48.171059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.171090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.171222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.171256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.171384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.171416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.171589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.171620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.171738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.171770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.172004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.172036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.172233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.172266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.172585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.172618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.172755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.172787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.173023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.173054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 
00:27:17.068 [2024-11-20 16:28:48.173249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.173282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.173477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.173510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.173699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.173731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.173849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.173881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.174067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.174100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.174331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.174365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.174553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.174585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.174773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.174806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.174996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.175027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.175152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.175185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 
00:27:17.068 [2024-11-20 16:28:48.175459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.175493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.175669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.175700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.175827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.175860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.176037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.176069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.176192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.176240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.176424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.176455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.176637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.176669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.176805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-11-20 16:28:48.176836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-11-20 16:28:48.177073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.177105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.177344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.177377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-11-20 16:28:48.177516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.177548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.177730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.177762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.177896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.177927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.178217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.178251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.178432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.178463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.178654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.178686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.178879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.178911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.179082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.179113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.179357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.179391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.179625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.179657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-11-20 16:28:48.179846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.179878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.180026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.180057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.180239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.180272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.180516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.180548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.180722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.180753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.180924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.180956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.181148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.181180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.181374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.181407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.181514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.181546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.181735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.181767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-11-20 16:28:48.182035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.182066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.182268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.182307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.182491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.182523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.182710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.182743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.182933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.182964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.183146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.183178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.183292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.183325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.183432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.183464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.183669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.183700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.183823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.183855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-11-20 16:28:48.184048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.184080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.184289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.184326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.184516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.184547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.184724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.184756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.184996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.185028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.185163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.185196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 16:28:48.185473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 16:28:48.185505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.185700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.185731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.185921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.185954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.186143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.186176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-11-20 16:28:48.186431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.186463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.186638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.186671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.186847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.186879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.187071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.187103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.187371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.187405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.187690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.187722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.187862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.187893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.188089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.188121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.188315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.188354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.188533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.188565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-11-20 16:28:48.188749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.188780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.188959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.188990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.189098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.189130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.189432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.189465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.189677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.189709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.189968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.190001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.190119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.190150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.190417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.190450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.190712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.190744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.190877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.190908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-11-20 16:28:48.191060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.191092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.191281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.191314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.191562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.191594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.191781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.191813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.192051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.192083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.192274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.192307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.192421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.192452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.192558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.192591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.192710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.192743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 16:28:48.192847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 16:28:48.192878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.071 [2024-11-20 16:28:48.193073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.193106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.193295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.193329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.193512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.193544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.193652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.193683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.193917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.193950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.194145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.194178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.194457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.194490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.194679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.194712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.194887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.194921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.195036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.195069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 
00:27:17.071 [2024-11-20 16:28:48.195336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.195370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.195550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.195581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.195685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.195718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.195841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.195872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.196059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.196091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.196353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.196387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.196593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.196624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.196808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.196841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.196965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.196996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.197188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.197233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 
00:27:17.071 [2024-11-20 16:28:48.197499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.197532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.197645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.197678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.197803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.197835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.198096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.198130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.198315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.198349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.198488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.198521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.198796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.198829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.199038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.199070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.199255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.199288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.199476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.199509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 
00:27:17.071 [2024-11-20 16:28:48.199689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.199721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.199931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.199964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.200144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.200178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.200460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.200494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.200627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.200659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.200900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.200933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.201117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.201148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-11-20 16:28:48.201335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-11-20 16:28:48.201368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.201495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.201527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.201788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.201821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 
00:27:17.072 [2024-11-20 16:28:48.201948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.201980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.202164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.202198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.202346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.202378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.202485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.202517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.202779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.202813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.202946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.202979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.203101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.203138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.203324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.203358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.203596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.203629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.203893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.203926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 
00:27:17.072 [2024-11-20 16:28:48.204058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.204091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.204270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.204306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.204498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.204530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.204649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.204682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.204884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.204916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.205185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.205226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.205427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.205459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.205646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.205679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.205946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.205977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.206222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.206257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 
00:27:17.072 [2024-11-20 16:28:48.206443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.206485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.206601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.206634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.206764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.206796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.207035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.207067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.207248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.207281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.207523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.207555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.207676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.207708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.207890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.207923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.208055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.208087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.208269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.208303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 
00:27:17.072 [2024-11-20 16:28:48.208435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.208467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.208576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.208609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.208846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.208878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.209073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.209112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.209221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.209254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-11-20 16:28:48.209448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-11-20 16:28:48.209480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.209692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.209723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.209852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.209884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.209999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.210031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.210215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.210250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 
00:27:17.073 [2024-11-20 16:28:48.210485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.210518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.210717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.210751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.210956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.210989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.211182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.211224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.211466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.211500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.211673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.211705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.211883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.211916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.212109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.212143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.212275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.212309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.212497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.212531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 
00:27:17.073 [2024-11-20 16:28:48.212649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.212681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.212888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.212921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.213040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.213072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.213178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.213219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.213395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.213426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.213557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.213591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.213710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.213742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.213867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.213899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.214100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.214133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.214332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.214367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 
00:27:17.073 [2024-11-20 16:28:48.214557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.214588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.214717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.214751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.214922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.214955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.215066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.215097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.215245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.215281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.215477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.215510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.215617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.215650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.215777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.215809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.216074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.216107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.216244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.216280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 
00:27:17.073 [2024-11-20 16:28:48.216456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.216488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.216595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.216628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.216811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.216844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.217038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-11-20 16:28:48.217070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-11-20 16:28:48.217181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.217224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.217484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.217516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.217625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.217657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.217915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.217946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.218217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.218250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.218451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.218482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 
00:27:17.074 [2024-11-20 16:28:48.218739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.218771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.219012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.219045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.219170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.219211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.219398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.219430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.219569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.219602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.219846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.219879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.220079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.220111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.220310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.220344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.220540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.220571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.220690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.220723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 
00:27:17.074 [2024-11-20 16:28:48.220830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.220861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.221098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.221130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.221251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.221285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.221394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.221426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.221619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.221651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.221828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.221860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.222039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.222073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.222313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.222347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.222542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.222574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-11-20 16:28:48.222702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-11-20 16:28:48.222734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 
00:27:17.074 [2024-11-20 16:28:48.222972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.074 [2024-11-20 16:28:48.223005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:17.074 qpair failed and we were unable to recover it.
[... the same three messages (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeat continuously, with only the timestamps advancing, from 2024-11-20 16:28:48.223136 through 16:28:48.262550 ...]
00:27:17.364 [2024-11-20 16:28:48.262749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.364 [2024-11-20 16:28:48.262782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:17.364 qpair failed and we were unable to recover it.
00:27:17.364 [2024-11-20 16:28:48.262961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148faf0 is same with the state(6) to be set
00:27:17.364 [2024-11-20 16:28:48.263300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.364 [2024-11-20 16:28:48.263372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:17.364 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420, from 2024-11-20 16:28:48.263510 through 16:28:48.267067 ...]
00:27:17.365 [2024-11-20 16:28:48.267263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.365 [2024-11-20 16:28:48.267299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:17.365 qpair failed and we were unable to recover it.
00:27:17.365 [2024-11-20 16:28:48.267565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.267597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.267787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.267822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.268006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.268039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.268251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.268285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.268483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.268515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.268625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.268657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.268771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.268803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.269051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.269084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.269258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.269298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.269547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.269579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 
00:27:17.365 [2024-11-20 16:28:48.269701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.269733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.269907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.269941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.270116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.270148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.270342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.270376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.270483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.270515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.270707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.270739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.270919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.270953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.271222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.271257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.271450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.271482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.271666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.271701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 
00:27:17.365 [2024-11-20 16:28:48.271807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.271838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.271971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.272005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 16:28:48.272136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 16:28:48.272170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.272363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.272396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.272593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.272627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.272816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.272848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.273035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.273068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.273177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.273223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.273397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.273432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.273538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.273570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.366 [2024-11-20 16:28:48.273752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.273785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.273905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.273937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.274103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.274135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.274247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.274280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.274467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.274500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.274684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.274716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.274888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.274919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.275044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.275075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.275195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.275238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.275375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.275406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.366 [2024-11-20 16:28:48.275698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.275730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.275849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.275881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.275996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.276028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.276152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.276192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.276442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.276475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.276648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.276680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.276875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.276908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.277043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.277078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.277255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.277294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.277488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.277522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.366 [2024-11-20 16:28:48.277658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.277691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.277890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.277922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.278042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.278073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.278189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.278230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.278337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.278369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.278565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.278596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.278766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.278796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.278920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.278955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.279162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.279193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 16:28:48.279324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 16:28:48.279357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.366 [2024-11-20 16:28:48.279548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.279581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.279769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.279800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.279921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.279953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.280076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.280107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.280241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.280275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.280456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.280488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.280599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.280630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.280804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.280836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.281014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.281045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.281148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.281180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 
00:27:17.367 [2024-11-20 16:28:48.281435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.281469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.281635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.281670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.281938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.281970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.282156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.282190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.282340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.282372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.282586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.282624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.282745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.282778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.282919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.282952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.283078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.283110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.283232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.283266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 
00:27:17.367 [2024-11-20 16:28:48.283447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.283480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.283580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.283613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.283724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.283756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.283998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.284032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.284228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.284261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.284501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.284534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.284662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.284696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.284892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.284924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.285049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.285082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.285233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.285268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 
00:27:17.367 [2024-11-20 16:28:48.285387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.285422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-11-20 16:28:48.285613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-11-20 16:28:48.285646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.285775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.285808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.285941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.285978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.286087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.286120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.286246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.286279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.286383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.286415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.286537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.286570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.286752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.286785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.286916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.286948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 
00:27:17.368 [2024-11-20 16:28:48.287078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.287111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.287238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.287272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.287391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.287427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.287538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.287570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.287690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.287723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.287912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.287944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.288083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.288115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.288239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.288273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.288379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.288412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.288520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.288552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 
00:27:17.368 [2024-11-20 16:28:48.288661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.288693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.288823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.288855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.289041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.289074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.289194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.289235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.289344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.289377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.289589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.289627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.289735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.289768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.289873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.289905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.290094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.290127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.290248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.290280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 
00:27:17.368 [2024-11-20 16:28:48.290468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.290501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.290642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.290674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.290785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.290817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.291019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.291052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.291254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.291288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.291416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.291448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.291571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.291604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.291743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.291777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.291957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-11-20 16:28:48.291991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-11-20 16:28:48.292114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.369 [2024-11-20 16:28:48.292146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.369 qpair failed and we were unable to recover it. 
00:27:17.369 [2024-11-20 16:28:48.292336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.369 [2024-11-20 16:28:48.292370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:17.369 qpair failed and we were unable to recover it.
[the same three-line sequence repeats back-to-back for the rest of this span, with only the bracketed timestamps advancing from 16:28:48.292336 to 16:28:48.331433; every attempt fails with connect() errno = 111 against tqpair=0x7fec98000b90, addr=10.0.0.2, port=4420, and each time the qpair fails and is not recovered]
00:27:17.374 [2024-11-20 16:28:48.331689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-11-20 16:28:48.331721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-11-20 16:28:48.331896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-11-20 16:28:48.331929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-11-20 16:28:48.332045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-11-20 16:28:48.332081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-11-20 16:28:48.332396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-11-20 16:28:48.332430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-11-20 16:28:48.332533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-11-20 16:28:48.332564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-11-20 16:28:48.332745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-11-20 16:28:48.332778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-11-20 16:28:48.332883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-11-20 16:28:48.332914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-11-20 16:28:48.333084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-11-20 16:28:48.333116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-11-20 16:28:48.333338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-11-20 16:28:48.333372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-11-20 16:28:48.333474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-11-20 16:28:48.333505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 
00:27:17.374 [2024-11-20 16:28:48.333618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.333650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.333835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.333868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.333973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.334004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.334127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.334159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.334354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.334386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.334579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.334612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.334882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.334915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.335106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.335139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.335380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.335413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.335700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.335734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 
00:27:17.375 [2024-11-20 16:28:48.335915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.335948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.336187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.336229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.336365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.336397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.336592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.336625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.336751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.336782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.336963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.336996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.337178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.337222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.337341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.337373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.337480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.337512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.337708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.337741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 
00:27:17.375 [2024-11-20 16:28:48.337927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.337959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.338138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.338170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.338448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.338482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.338672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.338705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.338830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.338862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.339048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.339080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.339217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.339251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.339383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.339415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.339524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.339555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.339854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.339886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 
00:27:17.375 [2024-11-20 16:28:48.340063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.340096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.340231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.340264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.340447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.340485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.340697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.340729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.340910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.340942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.341112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.341145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.341326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.341366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-11-20 16:28:48.341482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 16:28:48.341515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.341634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.341667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.341775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.341806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 
00:27:17.376 [2024-11-20 16:28:48.341921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.341953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.342067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.342099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.342367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.342401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.342577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.342609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.342715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.342747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.342938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.342970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.343232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.343265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.343382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.343413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.343582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.343614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.343725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.343756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 
00:27:17.376 [2024-11-20 16:28:48.343929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.343961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.344068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.344099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.344270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.344304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.344475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.344507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.344618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.344649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.344859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.344892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.345025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.345057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.345257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.345290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.345461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.345492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.345630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.345662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 
00:27:17.376 [2024-11-20 16:28:48.345864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.345896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.346037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.346070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.346337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.346371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.346500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.346532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.346656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.346688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.346877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.346909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.347030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.347062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.347245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.347279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.347451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.347483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 16:28:48.347590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.347622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 
00:27:17.376 [2024-11-20 16:28:48.347757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 16:28:48.347788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.347901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.347934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.348152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.348190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.348343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.348375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.348570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.348601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.348708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.348741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.348850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.348881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.348999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.349031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.349140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.349172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.349311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.349343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 
00:27:17.377 [2024-11-20 16:28:48.349529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.349560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.349664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.349695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.349869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.349902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.350079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.350112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.350298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.350332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.350524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.350557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.350688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.350720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.350841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.350873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.350995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.351026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.351143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.351176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 
00:27:17.377 [2024-11-20 16:28:48.351291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.351323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.351430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.351463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.351634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.351666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.351846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.351878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.352004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.352036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.352216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.352250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.352363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.352394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.352497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.352530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.352637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.352669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 16:28:48.352959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 16:28:48.353031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 
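For reference, errno 111 on Linux is ECONNREFUSED: the TCP connection attempt to 10.0.0.2 port 4420 (the well-known NVMe/TCP port) is rejected because nothing is accepting connections there, so SPDK's posix_sock_create() sees connect() fail and nvme_tcp_qpair_connect_sock() gives up on the qpair. The sketch below is illustrative only and is not SPDK code; it reproduces the same errno by dialing a local port with no listener. The loopback address and the hard-coded port are assumptions for the demonstration, not the CI target.

/* Minimal sketch (not SPDK code): reproduce errno = 111 (ECONNREFUSED)
 * by connecting to a TCP port that has no listener. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                      /* NVMe/TCP port, mirroring the log */
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* loopback stand-in for 10.0.0.2 */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on the port this prints errno = 111 (Connection refused). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Built with cc repro.c -o repro and run on a host with nothing bound to the chosen port, it prints the same "connect() failed, errno = 111" seen throughout the records above.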
00:27:17.377 ... 00:27:17.378 [the same failure pattern continues for tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 from 16:28:48.353174 through 16:28:48.360293; every attempt ends with "qpair failed and we were unable to recover it."]
00:27:17.378 [2024-11-20 16:28:48.360480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.378 [2024-11-20 16:28:48.360551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:17.378 qpair failed and we were unable to recover it.
00:27:17.383 [2024-11-20 16:28:48.396075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 16:28:48.396107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 16:28:48.396281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 16:28:48.396314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 16:28:48.396436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 16:28:48.396468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.396591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.396623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.396741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.396773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.396958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.396991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.397136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.397168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.397296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.397331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.397506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.397538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.397781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.397813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 
00:27:17.384 [2024-11-20 16:28:48.397995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.398028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.398147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.398179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.398322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.398361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.398479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.398511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.398701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.398734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.398974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.399006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.399114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.399146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.399284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.399318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.399437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.399469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.399653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.399685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 
00:27:17.384 [2024-11-20 16:28:48.399808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.399840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.399945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.399977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.400108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.400141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.400259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.400292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.400462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.400495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.400668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.400700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.400831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.400864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.401035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.401067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.401239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.401273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.401461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.401493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 
00:27:17.384 [2024-11-20 16:28:48.401739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.401771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.402024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.402057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.402175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.402217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.402405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.402437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.402673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.402707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.402820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.402852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.403049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.403082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.403268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.403301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.403428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.403459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 16:28:48.403643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.403681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 
00:27:17.384 [2024-11-20 16:28:48.403878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 16:28:48.403910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.404029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.404061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.404248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.404282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.404401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.404433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.404605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.404637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.404764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.404797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.404979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.405010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.405113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.405146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.405333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.405366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.405535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.405568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 
00:27:17.385 [2024-11-20 16:28:48.405832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.405864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.405971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.406004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.406212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.406246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.406362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.406395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.406615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.406647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.406841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.406874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.407144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.407176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.407371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.407405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.407614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.407646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.407773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.407806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 
00:27:17.385 [2024-11-20 16:28:48.407932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.407964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.408082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.408114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.408304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.408339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.408446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.408478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.408742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.408775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.408897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.408929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.409051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.409084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.409196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.409238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.409371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.409404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.409531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.409563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 
00:27:17.385 [2024-11-20 16:28:48.409695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.409728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.409898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.409930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.410166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.410198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.385 qpair failed and we were unable to recover it. 00:27:17.385 [2024-11-20 16:28:48.410418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.385 [2024-11-20 16:28:48.410451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.410713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.410745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.410862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.410895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.411016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.411050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.411237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.411271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.411447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.411480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.411655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.411688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 
00:27:17.386 [2024-11-20 16:28:48.411973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.412006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.412184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.412240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.412370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.412403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.412610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.412641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.412893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.412925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.413052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.413085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.413365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.413399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.413522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.413554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.413728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.413759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.413865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.413897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 
00:27:17.386 [2024-11-20 16:28:48.414068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.414101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.414293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.414327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.414525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.414557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.414745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.414778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.414911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.414946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.415184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.415227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.415402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.415435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.415621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.415653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.415864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.415896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.416025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.416059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 
00:27:17.386 [2024-11-20 16:28:48.416233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.416267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.416533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.416565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.416689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.416722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.416850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.416882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.417090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.417122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.417316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.417350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.417469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.417502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.417721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.417759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.417891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.417923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.418117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.418149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 
00:27:17.386 [2024-11-20 16:28:48.418369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.418403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.386 [2024-11-20 16:28:48.418573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.386 [2024-11-20 16:28:48.418606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.386 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.418776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.418808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.418991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.419024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.419263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.419296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.419563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.419596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.419711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.419751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.419946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.419977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.420174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.420213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.420460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.420492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 
00:27:17.387 [2024-11-20 16:28:48.420624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.420656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.420848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.420882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.421066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.421100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.421234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.421268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.421462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.421493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.421612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.421644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.421754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.421785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.421975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.422006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.422134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.422166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.422372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.422405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 
00:27:17.387 [2024-11-20 16:28:48.422508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.422541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.422723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.422755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.422937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.422970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.423145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.423177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.423312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.423357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.423551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.423583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.423782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.423814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.423930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.423962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.424065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.424096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.424199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.424240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 
00:27:17.387 [2024-11-20 16:28:48.424425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.424456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.424576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.424609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.424846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.424877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.425055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.425088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.425280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.425315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.425485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.425517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.425662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.425694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.425871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.425904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.426151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.426182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 00:27:17.387 [2024-11-20 16:28:48.426318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.387 [2024-11-20 16:28:48.426352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.387 qpair failed and we were unable to recover it. 
00:27:17.387 [2024-11-20 16:28:48.426599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.426631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.426756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.426788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.427009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.427041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.427159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.427191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.427306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.427338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.427464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.427496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.427770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.427802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.428004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.428036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.428216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.428249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.428453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.428485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 
00:27:17.388 [2024-11-20 16:28:48.428636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.428668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.428857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.428895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.429072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.429104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.429224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.429259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.429381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.429413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.429599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.429630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.429804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.429836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.429955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.429987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.430176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.430236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.430367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.430400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 
00:27:17.388 [2024-11-20 16:28:48.430667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.430699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.430822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.430854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.430971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.431002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.431209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.431242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.431387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.431419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.431606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.431677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.431971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.432007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.432200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.432251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 16:28:48.432355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 16:28:48.432387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.432653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.432685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 
00:27:17.389 [2024-11-20 16:28:48.432820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.432852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.433047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.433079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.433247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.433281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.433510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.433541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.433716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.433747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.433922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.433953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.434120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.434151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.434286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.434321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.434509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.434550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.434670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.434702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 
00:27:17.389 [2024-11-20 16:28:48.434872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.434905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.435095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.435127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.435241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.435274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.435466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.435498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.435669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.435701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.435889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.435921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.436136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.436168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.436364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.436398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.436500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.436532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.436716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.436748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 
00:27:17.389 [2024-11-20 16:28:48.436992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.437025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.437136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.437166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.437307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 16:28:48.437343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 16:28:48.437518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.437550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.437684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.437717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.437819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.437850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.437983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.438015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.438191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.438235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.438433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.438465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.438575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.438607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 
00:27:17.390 [2024-11-20 16:28:48.438782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.438813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.438931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.438963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.439136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.439168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.439312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.439345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.439532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.439564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.439680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.439717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.439909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.439942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.440065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.440097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.440235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.440269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.440451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.440483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 
00:27:17.390 [2024-11-20 16:28:48.440590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.440622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.440744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.440775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.440905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.440938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.441042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.441074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.441258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.441290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.441480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.441511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.441702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.441735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.441935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.441969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 16:28:48.442146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 16:28:48.442179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.442383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.442417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 
00:27:17.391 [2024-11-20 16:28:48.442607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.442639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.442818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.442851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.442961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.442993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.443120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.443152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.443297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.443331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.443451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.443483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.443598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.443630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.443750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.443782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.443903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.443936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.444043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.444075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 
00:27:17.391 [2024-11-20 16:28:48.444199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.444268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.444457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.444489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.444672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.444711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.444826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.444857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.444968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.445001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.445170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.445211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.445491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.445524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.445709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.391 [2024-11-20 16:28:48.445740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 00:27:17.391 [2024-11-20 16:28:48.445940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.445974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.446170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.446216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 
00:27:17.392 [2024-11-20 16:28:48.446435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.446467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.446583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.446617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.446805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.446838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.446947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.446981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.447263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.447297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.447485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.447518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.447646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.447680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.447813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.447846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.447966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.447997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.448110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.448144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 
00:27:17.392 [2024-11-20 16:28:48.448327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.448359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.448534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.448566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.448752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.448784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.448978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.449011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.449127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.449159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.449373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.449406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.449542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.449575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.449694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.449728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.449836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.449867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.450015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.450048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 
00:27:17.392 [2024-11-20 16:28:48.450173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.450212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.450325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.450358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.392 qpair failed and we were unable to recover it. 00:27:17.392 [2024-11-20 16:28:48.450485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.392 [2024-11-20 16:28:48.450518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.450706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.450739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.450944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.450975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.451096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.451128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.451251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.451286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.451406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.451438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.451567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.451599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.451720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.451752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 
00:27:17.393 [2024-11-20 16:28:48.451886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.451918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.452031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.452063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.452171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.452217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.452395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.452465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.452599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.452639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.452756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.452788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.452894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.452925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.453101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.453134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.453313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.453348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.453465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.453497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 
00:27:17.393 [2024-11-20 16:28:48.453688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.453719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.453913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.453945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.454217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.454250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.454382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.454415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.454529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.454561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.454685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.454717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.454900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.454944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.455054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.393 [2024-11-20 16:28:48.455086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.393 qpair failed and we were unable to recover it. 00:27:17.393 [2024-11-20 16:28:48.455226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.455260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.455401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.455435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 
00:27:17.394 [2024-11-20 16:28:48.455547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.455579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.455753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.455788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.456047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.456080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.456257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.456291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.456405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.456438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.456663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.456697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.456829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.456861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.456989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.457021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.457133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.457167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.457295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.457330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 
00:27:17.394 [2024-11-20 16:28:48.457445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.457479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.457671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.457704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.457822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.457854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.457966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.457998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.458113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.458147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.458272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.458305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.458491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.458523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.458707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.458740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.458846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.458878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.458994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.459027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 
00:27:17.394 [2024-11-20 16:28:48.459215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.459251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.459462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.459494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.459600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.459632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.459891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.459962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.460168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.460219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.460410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.460445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.460572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.460604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.460779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 16:28:48.460814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 16:28:48.460939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.460971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.461144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.461177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 
00:27:17.395 [2024-11-20 16:28:48.461306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.461340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.461584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.461617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.461796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.461831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.462003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.462035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.462151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.462184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.462393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.462428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.462532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.462576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.462823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.462854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.463027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.463059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.463233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.463267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 
00:27:17.395 [2024-11-20 16:28:48.463392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.463425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.463653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.463685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.463797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.463830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.463956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.463989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.464173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.464214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.464401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.464434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.464547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.464580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.464695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.464727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.464868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.464903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.465000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.465031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 
00:27:17.395 [2024-11-20 16:28:48.465287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.465323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.465437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.465470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.465681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.465714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.465833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.465867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.465974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.466008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.466136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.466170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.466366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.466401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.466580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.466613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.466787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.466820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.466940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.466972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 
00:27:17.395 [2024-11-20 16:28:48.467091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.467126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.467298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.467332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.467453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.467489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.467661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 16:28:48.467698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 16:28:48.467815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.467848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.467974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.468007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.468109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.468144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.468286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.468320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.468442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.468476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.468579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.468612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 
00:27:17.396 [2024-11-20 16:28:48.468736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.468770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.468886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.468918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.469038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.469072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.469210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.469244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.469420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.469453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.469557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.469590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.469717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.469750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.469938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.469971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.470081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.470114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.470289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.470321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 
00:27:17.396 [2024-11-20 16:28:48.470514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.470546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.470681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.470712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.470917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.470950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.471124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.471156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.471350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.471385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.471495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.471529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.471703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.471736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.471928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.471960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.472155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.472188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.472311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.472343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 
00:27:17.396 [2024-11-20 16:28:48.472455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.472489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.472678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.472711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.472823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.472855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.472980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.473012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.473182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.473237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.473348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.473379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.473555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.473588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.473711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.473744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.473984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.474016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.474232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.474266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 
00:27:17.396 [2024-11-20 16:28:48.474466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.474499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.474786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 16:28:48.474819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 16:28:48.474934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.474966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.475245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.475285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.475397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.475429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.475538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.475571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.475700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.475734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.475906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.475938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.476042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.476075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.476186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.476227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 
00:27:17.397 [2024-11-20 16:28:48.476333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.476365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.476477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.476510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.476629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.476662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.476837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.476870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.477108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.477141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.477339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.477374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.477496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.477529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.477711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.477745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.477919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.477951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.478188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.478232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 
00:27:17.397 [2024-11-20 16:28:48.478474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.478505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.478771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.478802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.478925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.478958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.479141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.479174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.479386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.479420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.479539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.479571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.479679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.479710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.479825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.479858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.480106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.480139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.480310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.480344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 
00:27:17.397 [2024-11-20 16:28:48.480589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.480623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.480814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.480847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.481042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.481074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.481251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 16:28:48.481284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 16:28:48.481408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.481440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.481569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.481601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.481775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.481809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.481988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.482021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.482138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.482169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.482370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.482404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 
00:27:17.398 [2024-11-20 16:28:48.482538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.482571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.482746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.482780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.482956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.482989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.483256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.483295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.483423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.483456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.483630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.483663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.483788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.483821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.483925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.483957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.484075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.484108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.484243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.484280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 
00:27:17.398 [2024-11-20 16:28:48.484545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.484578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.484771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.484804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.484923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.484954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.485062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.485096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.485222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.485253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.485438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.485470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.485583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.485616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.485862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.485895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.486065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.486097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.486245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.486279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 
00:27:17.398 [2024-11-20 16:28:48.486568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.486601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.486793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.486827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.486933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.486967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.487167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.487199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.487331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.487364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.487469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.487504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.487636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.487668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.487841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.487874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.488161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.488194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.488383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.488416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 
00:27:17.398 [2024-11-20 16:28:48.488530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.488562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.398 [2024-11-20 16:28:48.488828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.398 [2024-11-20 16:28:48.488859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.398 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.488982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.489016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.489125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.489166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.489363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.489398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.489612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.489645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.489770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.489803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.489990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.490030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.490230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.490265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.490389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.490422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 
00:27:17.399 [2024-11-20 16:28:48.490729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.490761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.490948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.490982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.491171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.491214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.491339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.491379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.491569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.491601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.491891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.491926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.492124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.492155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.492302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.492336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.492575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.492610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.492832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.492866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 
00:27:17.399 [2024-11-20 16:28:48.493056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.493089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.493214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.493249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.493418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.493450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.493626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.493658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.493902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.493933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.494065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.494097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.494223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.494256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.494453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.494485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.494623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.494655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.494755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.494788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 
00:27:17.399 [2024-11-20 16:28:48.494976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.495009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.495215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.495249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.495371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.495403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.495592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.495624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.495808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.495842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.496112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.496146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.496290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.496323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.496504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.496537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.496660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.399 [2024-11-20 16:28:48.496692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.399 qpair failed and we were unable to recover it. 00:27:17.399 [2024-11-20 16:28:48.496952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.496985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 
00:27:17.400 [2024-11-20 16:28:48.497175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.497216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.497406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.497438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.497677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.497708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.497882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.497914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.498108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.498139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.498342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.498376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.498477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.498512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.498690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.498722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.498897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.498929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.499099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.499133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 
00:27:17.400 [2024-11-20 16:28:48.499259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.499293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.499425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.499457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.499722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.499756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.499954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.499998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.500144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.500176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.500310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.500344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.500478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.500510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.500700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.500735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.500839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.500871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.501145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.501177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 
00:27:17.400 [2024-11-20 16:28:48.501321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.501355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.501539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.501571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.501683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.501716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.501909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.501941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.502062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.502094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.502301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.502334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.502525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.502557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.502829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.502863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.503062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.503096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.503343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.503377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 
00:27:17.400 [2024-11-20 16:28:48.503550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.503583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.503786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.503818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.504017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.504049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.504157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.504190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.504389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.504422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 16:28:48.504620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 16:28:48.504652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.504898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.504930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.505057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.505089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.505310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.505342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.505532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.505564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 
00:27:17.401 [2024-11-20 16:28:48.505775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.505807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.505980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.506012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.506212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.506246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.506376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.506408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.506530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.506562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.506807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.506839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.507023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.507056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.507162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.507194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.507444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.507477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.507617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.507649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 
00:27:17.401 [2024-11-20 16:28:48.507832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.507864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.508041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.508073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.508259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.508292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.508411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.508449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.508560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.508592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.508831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.508863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.509042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.509075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.509193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.509237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.509374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.509407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.509535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.509568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 
00:27:17.401 [2024-11-20 16:28:48.509684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.509719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.509906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.509937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.510182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.510220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.510410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.510443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.510631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.510663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.510906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.510939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.511115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.511146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.511282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.511317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.511595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 16:28:48.511627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 16:28:48.511756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.511788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 
00:27:17.402 [2024-11-20 16:28:48.511908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.511939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.512064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.512096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.512223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.512257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.512444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.512477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.512597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.512630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.512743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.512775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.512876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.512909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.513023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.513057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.513242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.513274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.513544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.513576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 
00:27:17.402 [2024-11-20 16:28:48.513703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.513736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.513909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.513940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.514128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.514160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.514422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.514454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.514563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.514595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.514766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.514799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.514972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.515014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.515147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.515179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.515370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.515403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.515531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.515563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 
00:27:17.402 [2024-11-20 16:28:48.515748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.515780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.515882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.515913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.516088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.516120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.516293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.516333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.516518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.516550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.516685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.516715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.516839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.516870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.516981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.517014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.517123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.517155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.517403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.517437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 
00:27:17.402 [2024-11-20 16:28:48.517558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.517590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.517694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.517726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.517920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.517951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.518192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.518232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.518413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.518447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.518556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.518588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 16:28:48.518715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 16:28:48.518747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.518939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.518971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.519165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.519197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.519376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.519409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 
00:27:17.403 [2024-11-20 16:28:48.519525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.519559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.519681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.519713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.519980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.520013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.520121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.520155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.520297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.520332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.520514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.520547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.520671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.520704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.520806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.520840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.520968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.521001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.521106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.521139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 
00:27:17.403 [2024-11-20 16:28:48.521337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.521373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.521491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.521523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.521649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.521681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.521805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.521837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.521960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.521993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.522128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.522161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.522391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.522425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.522599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.522632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.522751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.522783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.522971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.523004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 
00:27:17.403 [2024-11-20 16:28:48.523195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.523239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.523415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.523448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.523617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.523651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.523871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.523909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.524083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.524117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.524241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.524274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.524394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.524427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.524551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.524582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.524697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.524730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.524865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.524898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 
00:27:17.403 [2024-11-20 16:28:48.525098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.525130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.525236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.525268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.525440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.525472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.525702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.403 [2024-11-20 16:28:48.525733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.403 qpair failed and we were unable to recover it. 00:27:17.403 [2024-11-20 16:28:48.525861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.525893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.526012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.526044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.526256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.526289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.526475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.526507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.526642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.526673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.526797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.526829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 
00:27:17.404 [2024-11-20 16:28:48.526945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.526976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.527149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.527181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.527310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.527344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.527531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.527564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.527680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.527715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.527820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.527852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.527967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.528000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.528197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.528264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.528387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.528420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.528538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.528571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 
00:27:17.404 [2024-11-20 16:28:48.528891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.528963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.529165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.529221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.529408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.529443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.529560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.529593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.529698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.529731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.529913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.529946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.530069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.530102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.530228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.530264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.530453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.530486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.530624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.530657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 
00:27:17.404 [2024-11-20 16:28:48.530828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.530861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.531102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.531134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.531261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.531293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.531500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.531533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.531812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.531845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.532029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.532062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.532179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.532221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.532349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.532381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.532579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.532610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.532793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.532825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 
00:27:17.404 [2024-11-20 16:28:48.533029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.533063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.404 [2024-11-20 16:28:48.533178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.404 [2024-11-20 16:28:48.533219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.404 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.533406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.533438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.533566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.533598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.533719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.533752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.533972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.534003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.534198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.534240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.534426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.534463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.534648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.534680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.534860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.534894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 
00:27:17.405 [2024-11-20 16:28:48.535081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.535116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.535242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.535276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.535411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.535444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.535633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.535665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.535779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.535811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.535987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.536019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.536191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.536234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.536423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.536455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.536642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.536675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.536848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.536880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 
00:27:17.405 [2024-11-20 16:28:48.537005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.537037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.537150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.537182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.537313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.537348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.537465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.537499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.537702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.537733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.537909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.537943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.538120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.538152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.538335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.538369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.538481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.538513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.538687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.538719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 
00:27:17.405 [2024-11-20 16:28:48.538916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.538947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.539067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.539099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.539242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.539276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.539460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.539492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.539677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.539709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.539827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.539861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.540043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.540076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.540281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.540317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.540444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.540475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.540595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.540628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 
00:27:17.405 [2024-11-20 16:28:48.540746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 16:28:48.540779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it. 00:27:17.405 [2024-11-20 16:28:48.541021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.541054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.541180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.541222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.541340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.541373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.541496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.541529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.541654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.541687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.541802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.541835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.541954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.541991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.542228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.542263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.542391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.542423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 
00:27:17.406 [2024-11-20 16:28:48.542596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.542628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.542751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.542783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.543043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.543076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.543256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.543290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.543395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.543427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.543645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.543677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.543790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.543822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.543997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.544030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.544237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.544270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.544471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.544506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 
00:27:17.406 [2024-11-20 16:28:48.544698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.544730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.544852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.544884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.544989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.545019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.545147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.545179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.545380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.545413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.545531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.545563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.545689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.545721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.545960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.545993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.546175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.546214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.546328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.546361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 
00:27:17.406 [2024-11-20 16:28:48.546478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.546510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.546760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.546791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.546969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.547001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.547107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.547139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.547265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.547298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.547406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.547437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.547606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 16:28:48.547638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 16:28:48.547876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.547908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.548021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.548052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.548244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.548276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 
00:27:17.407 [2024-11-20 16:28:48.548468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.548500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.548697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.548728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.548855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.548887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.549010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.549041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.549143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.549175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.549353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.549425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.549646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.549682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.549875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.549918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.550174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.550225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.550413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.550447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 
00:27:17.407 [2024-11-20 16:28:48.550650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.550684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.550816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.550848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.550970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.551002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.551191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.551242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.551462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.551495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.551670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.551703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.551899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.551933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.552113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.552144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.552329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.552365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.552543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.552577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 
00:27:17.407 [2024-11-20 16:28:48.552697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.552729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.552909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.552942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.553073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.553105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.553285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.553319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.553608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.553642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.553762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.553795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.553998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.554032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.554154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 16:28:48.554188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 16:28:48.554390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.554424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.554598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.554629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 
00:27:17.408 [2024-11-20 16:28:48.554816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.554849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.554973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.555005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.555133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.555164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.555288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.555321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.555560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.555598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.555780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.555811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.555952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.555984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.556099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.556132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.556320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.556358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.556508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.556540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 
00:27:17.408 [2024-11-20 16:28:48.556671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.556703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.556967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.556999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.557177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.557222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.557345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.557377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.557498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.557529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.557643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.557675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.557865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.557896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.558093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.558126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.558314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.558349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.558448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.558480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 
00:27:17.408 [2024-11-20 16:28:48.558720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.558752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.558861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.558893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.559033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.559066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.559217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.559250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.559388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.559424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.559547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.559579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.559686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.408 [2024-11-20 16:28:48.559719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.408 qpair failed and we were unable to recover it. 00:27:17.408 [2024-11-20 16:28:48.559960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.559993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.560101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.560133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.560252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.560286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 
00:27:17.409 [2024-11-20 16:28:48.560403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.560436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.560623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.560660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.560839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.560872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.561061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.561093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.561285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.561319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.561573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.561605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.561856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.561888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.562068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.562100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.562222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.562255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.562368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.562400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 
00:27:17.409 [2024-11-20 16:28:48.562581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.562614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.562739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.562781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.562971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.563016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.563229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.563276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.563420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.563457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.563628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.563698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.563961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.563998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.564185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.564235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.564364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.564397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.564512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.564543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 
00:27:17.409 [2024-11-20 16:28:48.564684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.564718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.564906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.564938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.565067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.565098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.565290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.565326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.565565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.565598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.565702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.409 [2024-11-20 16:28:48.565732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.409 qpair failed and we were unable to recover it. 00:27:17.409 [2024-11-20 16:28:48.565977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.410 [2024-11-20 16:28:48.566009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.410 qpair failed and we were unable to recover it. 00:27:17.410 [2024-11-20 16:28:48.566128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.410 [2024-11-20 16:28:48.566158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.410 qpair failed and we were unable to recover it. 00:27:17.410 [2024-11-20 16:28:48.566293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.410 [2024-11-20 16:28:48.566334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.410 qpair failed and we were unable to recover it. 00:27:17.410 [2024-11-20 16:28:48.566543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.410 [2024-11-20 16:28:48.566576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.410 qpair failed and we were unable to recover it. 
00:27:17.410 [2024-11-20 16:28:48.566755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.410 [2024-11-20 16:28:48.566786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.410 qpair failed and we were unable to recover it. 00:27:17.410 [2024-11-20 16:28:48.566913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.410 [2024-11-20 16:28:48.566944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.410 qpair failed and we were unable to recover it. 00:27:17.410 [2024-11-20 16:28:48.567118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.410 [2024-11-20 16:28:48.567149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.410 qpair failed and we were unable to recover it. 00:27:17.410 [2024-11-20 16:28:48.567304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.410 [2024-11-20 16:28:48.567337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.410 qpair failed and we were unable to recover it. 00:27:17.410 [2024-11-20 16:28:48.567481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.410 [2024-11-20 16:28:48.567513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.410 qpair failed and we were unable to recover it. 00:27:17.410 [2024-11-20 16:28:48.567630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.700 [2024-11-20 16:28:48.567661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.700 qpair failed and we were unable to recover it. 00:27:17.700 [2024-11-20 16:28:48.567836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.700 [2024-11-20 16:28:48.567869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.700 qpair failed and we were unable to recover it. 00:27:17.700 [2024-11-20 16:28:48.568044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.700 [2024-11-20 16:28:48.568076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.700 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.568193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.568233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.568356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.568389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 
00:27:17.701 [2024-11-20 16:28:48.568582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.568615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.568820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.568854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.569017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.569048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.569167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.569198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.569351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.569382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.569587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.569617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.569749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.569779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.569897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.569927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.570172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.570213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.570340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.570371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 
00:27:17.701 [2024-11-20 16:28:48.570474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.570504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.570612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.570642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.570822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.570853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.570969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.570998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.571120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.571151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.571342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.571410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.571648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.571685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.571809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.571840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.572031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.572063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.572239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.572273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 
00:27:17.701 [2024-11-20 16:28:48.572487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.572521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.572700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.572730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.572923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.572955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.573079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.573109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.573309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.573341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.573529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.573561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.573671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.573702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.573809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.573840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.573945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.573986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.574161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.574193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 
00:27:17.701 [2024-11-20 16:28:48.574332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.574363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.574562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.574593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.574713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.574744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.574914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.574945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 16:28:48.575079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 16:28:48.575109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.575226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.575259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.575365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.575395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.575510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.575543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.575718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.575750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.575938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.575971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 
00:27:17.702 [2024-11-20 16:28:48.576075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.576107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.576224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.576258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.576459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.576492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.576676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.576708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.576920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.576951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.577059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.577091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.577270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.577302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.577542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.577574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.577712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.577744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.577866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.577897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 
00:27:17.702 [2024-11-20 16:28:48.578158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.578190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.578332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.578365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.578545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.578581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.578703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.578734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.578863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.578893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.579071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.579104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.579296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.579329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.579514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.579546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.579791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.579824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.580065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.580096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 
00:27:17.702 [2024-11-20 16:28:48.580284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.580318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.580482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.580514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.580687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.580718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.580826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.580859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.581047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.581079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.581267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.581316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.581429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.581462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.581703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.581736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.581843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.581881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.582062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.582094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 
00:27:17.702 [2024-11-20 16:28:48.582290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.582324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.582447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 16:28:48.582479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 16:28:48.582590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.582623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.582800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.582831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.582939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.582972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.583217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.583251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.583397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.583431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.583609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.583641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.583831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.583865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.584045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.584079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 
00:27:17.703 [2024-11-20 16:28:48.584216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.584251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.584435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.584468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.584596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.584628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.584750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.584782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.584966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.584999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.585131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.585162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.585312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.585345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.585453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.585486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.585667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.585698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.585876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.585908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 
00:27:17.703 [2024-11-20 16:28:48.586082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.586114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.586236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.586269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.586378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.586410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.586520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.586553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.586673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.586705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.586871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.586942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.587080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.587120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.587320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.587355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.587468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.587501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.587690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.587722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 
00:27:17.703 [2024-11-20 16:28:48.587829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.587861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.587993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.588025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.588229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.588262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.588441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.588473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.588651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.588684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.588872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.588905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.589090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.589122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.589382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.589416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.589532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.589564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 16:28:48.589756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 16:28:48.589788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 
00:27:17.703 [2024-11-20 16:28:48.590033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.704 [2024-11-20 16:28:48.590065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:17.704 qpair failed and we were unable to recover it.
00:27:17.704-00:27:17.709 (the same three-line error sequence repeats for every subsequent reconnect attempt, from [2024-11-20 16:28:48.590192] through [2024-11-20 16:28:48.629856]: each attempt fails connect() with errno = 111 against 10.0.0.2 port 4420 on tqpair=0x7feca4000b90, and each qpair fails without recovery)
00:27:17.709 [2024-11-20 16:28:48.629982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.709 [2024-11-20 16:28:48.630013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.709 qpair failed and we were unable to recover it. 00:27:17.709 [2024-11-20 16:28:48.630191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.709 [2024-11-20 16:28:48.630231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.709 qpair failed and we were unable to recover it. 00:27:17.709 [2024-11-20 16:28:48.630497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.709 [2024-11-20 16:28:48.630530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.709 qpair failed and we were unable to recover it. 00:27:17.709 [2024-11-20 16:28:48.630668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.709 [2024-11-20 16:28:48.630701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.709 qpair failed and we were unable to recover it. 00:27:17.709 [2024-11-20 16:28:48.630808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.709 [2024-11-20 16:28:48.630841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.709 qpair failed and we were unable to recover it. 00:27:17.709 [2024-11-20 16:28:48.630958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.709 [2024-11-20 16:28:48.630991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.709 qpair failed and we were unable to recover it. 00:27:17.709 [2024-11-20 16:28:48.631174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.709 [2024-11-20 16:28:48.631214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.631408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.631440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.631550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.631582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.631785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.631817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 
00:27:17.710 [2024-11-20 16:28:48.631928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.631960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.632109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.632141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.632279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.632312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.632429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.632461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.632577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.632609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.632801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.632838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.633020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.633052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.633156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.633188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.633318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.633351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.633644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.633676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 
00:27:17.710 [2024-11-20 16:28:48.633818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.633849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.634058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.634091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.634319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.634353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.634459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.634491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.634621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.634653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.634904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.634936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.635055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.635088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.635216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.635250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.635383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.635415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.635531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.635564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 
00:27:17.710 [2024-11-20 16:28:48.635756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.635789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.635979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.636011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.636142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.636174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.636315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.636348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.636532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.636564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.636700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.636732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.636840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.636873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.636991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.637023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.710 [2024-11-20 16:28:48.637215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.710 [2024-11-20 16:28:48.637249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.710 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.637358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.637391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 
00:27:17.711 [2024-11-20 16:28:48.637561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.637592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.637709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.637741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.637940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.637973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.638146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.638178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.638298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.638331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.638452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.638484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.638597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.638629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.638733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.638765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.638948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.638981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.639091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.639124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 
00:27:17.711 [2024-11-20 16:28:48.639366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.639400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.639525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.639557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.639667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.639698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.639803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.639834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.639940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.639972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.640092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.640130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.640384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.640418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.640535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.640567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.640700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.640732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.640838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.640870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 
00:27:17.711 [2024-11-20 16:28:48.641043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.641075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.641178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.641217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.641325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.641357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.641470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.641503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.641696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.641729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.641854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.641886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.642012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.642045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.642226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.642260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.642368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.642400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.642598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.642631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 
00:27:17.711 [2024-11-20 16:28:48.642744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.642776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.642913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.642944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.643097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.643129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.643249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.643289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.643414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.643447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.643617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.643650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.711 qpair failed and we were unable to recover it. 00:27:17.711 [2024-11-20 16:28:48.643772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.711 [2024-11-20 16:28:48.643805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.643927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.643960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.644074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.644106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.644250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.644282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 
00:27:17.712 [2024-11-20 16:28:48.644482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.644515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.644663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.644696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.644873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.644943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.645180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.645232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.645356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.645389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.645497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.645530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.645640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.645673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.645865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.645897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.646078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.646111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.646230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.646264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 
00:27:17.712 [2024-11-20 16:28:48.646513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.646544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.646652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.646683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.646897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.646930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.647187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.647231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.647370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.647401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.647509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.647542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.647672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.647706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.647834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.647866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.648042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.648075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.648186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.648229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 
00:27:17.712 [2024-11-20 16:28:48.648421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.648453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.648628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.648661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.648794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.648827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.648939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.648971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.649085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.649117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.649362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.649396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.649596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.649629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.649757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.649790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.649914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.649946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.650133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.650168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 
00:27:17.712 [2024-11-20 16:28:48.650324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.650358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.650482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.650515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.650684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.650715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.650899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.712 [2024-11-20 16:28:48.650932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.712 qpair failed and we were unable to recover it. 00:27:17.712 [2024-11-20 16:28:48.651055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.651087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.651222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.651255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.651368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.651400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.651516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.651548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.651790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.651822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.652013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.652045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 
00:27:17.713 [2024-11-20 16:28:48.652172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.652212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.652394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.652427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.652564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.652602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.652878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.652910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.653099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.653132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.653258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.653292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.653485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.653517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.653698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.653730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.653841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.653873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 00:27:17.713 [2024-11-20 16:28:48.654001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.713 [2024-11-20 16:28:48.654033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.713 qpair failed and we were unable to recover it. 
00:27:17.713 [2024-11-20 16:28:48.654166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.713 [2024-11-20 16:28:48.654199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:17.713 qpair failed and we were unable to recover it.
00:27:17.713 [... the same three-line error repeats continuously from 16:28:48.654 through 16:28:48.693: every connect() attempt to 10.0.0.2, port 4420 fails with errno = 111, the queue pair on tqpair=0x7feca4000b90 cannot be established, and recovery never succeeds ...]
00:27:17.719 [2024-11-20 16:28:48.693278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.719 [2024-11-20 16:28:48.693311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:17.719 qpair failed and we were unable to recover it.
00:27:17.719 [2024-11-20 16:28:48.693551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.693582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.693782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.693814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.694019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.694050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.694224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.694259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.694448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.694480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.694581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.694613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.694828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.694860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.694985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.695017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.695131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.695162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.695281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.695320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 
00:27:17.719 [2024-11-20 16:28:48.695495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.695527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.695728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.695759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.695935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.695967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.696077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.696108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.696282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.696316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.696445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.696476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.696674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.696706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.696843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.696874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.697116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.697148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.697361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.697394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 
00:27:17.719 [2024-11-20 16:28:48.697511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.697543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.697658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.697689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.697862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.697894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.698028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.698060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.698269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.698303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.698552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.698584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.698876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.698908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.699175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.719 [2024-11-20 16:28:48.699213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.719 qpair failed and we were unable to recover it. 00:27:17.719 [2024-11-20 16:28:48.699347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.699379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.699499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.699532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 
00:27:17.720 [2024-11-20 16:28:48.699639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.699671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.699861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.699892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.700068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.700100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.700219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.700251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.700500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.700532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.700733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.700764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.700875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.700908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.701089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.701122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.701355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.701389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.701562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.701593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 
00:27:17.720 [2024-11-20 16:28:48.701713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.701745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.701930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.701961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.702135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.702167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.702350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.702384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.702520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.702552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.702754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.702786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.702959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.702991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.703168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.703200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.703428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.703460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.703749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.703787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 
00:27:17.720 [2024-11-20 16:28:48.703916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.703947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.704067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.704099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.704294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.704328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.704454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.704486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.704595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.704628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.704750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.704782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.704961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.704993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.705259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.705294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.705495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.705527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.705709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.705741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 
00:27:17.720 [2024-11-20 16:28:48.705846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.705877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.706067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.706099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.706316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.706348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.706490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.706522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.706743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.706776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.706953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.720 [2024-11-20 16:28:48.706984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.720 qpair failed and we were unable to recover it. 00:27:17.720 [2024-11-20 16:28:48.707223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.707257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.707382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.707414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.707520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.707552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.707726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.707757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 
00:27:17.721 [2024-11-20 16:28:48.707930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.707970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.708089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.708121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.708243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.708277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.708449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.708481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.708612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.708644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.708819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.708851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.708960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.708992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.709190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.709236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.709428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.709462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.709644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.709676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 
00:27:17.721 [2024-11-20 16:28:48.709876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.709908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.710098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.710131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.710239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.710272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.710442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.710474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.710606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.710638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.710754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.710785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.710889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.710922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.711045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.711078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.711200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.711242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.711356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.711399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 
00:27:17.721 [2024-11-20 16:28:48.711550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.711581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.711753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.711785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.711972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.712004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.712287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.712321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.712496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.712528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.712717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.712749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.712929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.712960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.713098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.713130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.713257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.713291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.713541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.713573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 
00:27:17.721 [2024-11-20 16:28:48.713726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.713757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.713942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.713975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.714151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.714183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 16:28:48.714450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 16:28:48.714483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.714602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.714634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.714817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.714848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.715076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.715108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.715301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.715335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.715539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.715571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.715837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.715868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 
00:27:17.722 [2024-11-20 16:28:48.716019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.716050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.716173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.716216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.716411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.716442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.716653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.716685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.716815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.716847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.717039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.717071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.717252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.717326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.717530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.717566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.717762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.717796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.717968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.717999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 
00:27:17.722 [2024-11-20 16:28:48.718216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.718249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.718362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.718394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.718659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.718690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.718865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.718897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.719024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.719056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.719241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.719275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.719387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.719419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.719549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.719582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.719781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.719813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.719987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.720029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 
00:27:17.722 [2024-11-20 16:28:48.720223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.720258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.720396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.720427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.720603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.720637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.720882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.720915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.721178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.721223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.721397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.721429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.721548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.721580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.721844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.721876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.722017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.722049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.722244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.722278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 
00:27:17.722 [2024-11-20 16:28:48.722394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.722426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 16:28:48.722541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 16:28:48.722573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.722817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.722850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.723122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.723154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.723285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.723319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.723497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.723528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.723698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.723730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.723932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.723964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.724101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.724132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.724372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.724406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 
00:27:17.723 [2024-11-20 16:28:48.724534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.724566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.724700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.724732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.724937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.724969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.725154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.725186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.725312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.725344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.725534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.725566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.725858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.725929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.726134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.726168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.726426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.726461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.726638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.726670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 
00:27:17.723 [2024-11-20 16:28:48.726915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.726947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.727125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.727157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.727344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.727378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.727616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.727648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.727766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.727799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.727929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.727960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.728144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.728176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.728327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.728360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.728538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.728570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.728706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.728738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 
00:27:17.723 [2024-11-20 16:28:48.728883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.728915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.729089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.729121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.729255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.729289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.729475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.729508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.729763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.729794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.730003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.730035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.730275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.730308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.730499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.730530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.730651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.730683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 16:28:48.730873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 16:28:48.730905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 
00:27:17.724 [2024-11-20 16:28:48.731080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.731111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.731301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.731334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.731544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.731576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.731759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.731797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.731971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.732003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.732186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.732230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.732479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.732511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.732621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.732653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.732857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.732889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.733013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.733045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 
00:27:17.724 [2024-11-20 16:28:48.733287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.733321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.733441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.733473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.733667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.733699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.733825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.733857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.734034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.734066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.734334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.734368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.734558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.734590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.734863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.734895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.735096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.735128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.735318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.735352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 
00:27:17.724 [2024-11-20 16:28:48.735588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.735620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.735740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.735773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.735978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.736010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.736259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.736292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.736427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.736458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.736647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.736680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.736923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.736954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.737071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.737104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.737351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.737385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.737578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.737610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 
00:27:17.724 [2024-11-20 16:28:48.737856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.737895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.738097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.738130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.738378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.738411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 16:28:48.738676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 16:28:48.738708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.738895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.738927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.739221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.739254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.739390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.739422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.739679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.739711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.739900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.739933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.740118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.740150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 
00:27:17.725 [2024-11-20 16:28:48.740310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.740344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.740608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.740640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.740829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.740861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.741095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.741126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.741323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.741357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.741573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.741604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.741850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.741883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.742064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.742096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.742286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.742319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.742566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.742597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 
00:27:17.725 [2024-11-20 16:28:48.742861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.742894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.743079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.743110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.743242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.743275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.743410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.743443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.743703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.743735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.743864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.743897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.744138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.744170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.744370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.744408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.744587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.744620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.744808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.744840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 
00:27:17.725 [2024-11-20 16:28:48.745012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.745044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.745227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.745261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.745498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.745529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.745792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.745823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.746063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.746095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.746223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.746257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.746518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.746550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.746740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.746772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.747010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.747043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.747254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.747288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 
00:27:17.725 [2024-11-20 16:28:48.747488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.747520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.725 [2024-11-20 16:28:48.747723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.725 [2024-11-20 16:28:48.747756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.725 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.747940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.747972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.748159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.748190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.748491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.748524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.748661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.748692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.748947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.748980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.749171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.749213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.749401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.749432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.749697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.749729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 
00:27:17.726 [2024-11-20 16:28:48.749917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.749949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.750146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.750178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.750403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.750436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.750701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.750733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.750997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.751028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.751249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.751283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.751474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.751507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.751765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.751797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.751986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.752019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.752291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.752325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 
00:27:17.726 [2024-11-20 16:28:48.752543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.752574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.752768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.752801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.753011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.753043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.753229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.753262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.753500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.753532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.753707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.753739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.753919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.753951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.754142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.754174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.754304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.754337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.754540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.754573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 
00:27:17.726 [2024-11-20 16:28:48.754705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.754738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.754866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.754897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.755071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.755103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.755282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.755316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.755557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.755589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.755797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.755829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.756050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.756082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.756270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.756304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.756556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.756587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 00:27:17.726 [2024-11-20 16:28:48.756755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.726 [2024-11-20 16:28:48.756788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.726 qpair failed and we were unable to recover it. 
00:27:17.726 [2024-11-20 16:28:48.756971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.757003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.757256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.757290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.757509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.757542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.757720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.757752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.757925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.757957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.758076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.758108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.758373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.758407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.758539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.758571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.758780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.758813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.758933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.758966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 
00:27:17.727 [2024-11-20 16:28:48.759170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.759209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.759338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.759371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.759563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.759596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.759786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.759818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.760059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.760091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.760356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.760401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.760521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.760553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.760792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.760825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.761013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.761044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.761302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.761336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 
00:27:17.727 [2024-11-20 16:28:48.761519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.761551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.761729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.761762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.762026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.762058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.762231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.762265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.762446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.762477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.762679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.762710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.762971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.763004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.763121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.763153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.763338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.763371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.763505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.763538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 
00:27:17.727 [2024-11-20 16:28:48.763732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.763764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.764027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.764060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.764178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.764231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.764432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.764464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.764656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.764689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.764820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.764852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.765036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.765068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.765226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 16:28:48.765259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 16:28:48.765449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.765481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.765686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.765718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 
00:27:17.728 [2024-11-20 16:28:48.765932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.765964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.766070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.766102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.766218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.766256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.766504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.766536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.766776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.766808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.767006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.767038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.767348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.767382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.767618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.767651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.767845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.767878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.768073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.768104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 
00:27:17.728 [2024-11-20 16:28:48.768238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.768272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.768462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.768493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.768686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.768718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.768890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.768922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.769133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.769165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.769303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.769336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.769466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.769498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.769693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.769725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.769899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.769932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.770109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.770141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 
00:27:17.728 [2024-11-20 16:28:48.770414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.770447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.770687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.770718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.770893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.770924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.771051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.771083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.771222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.771256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.771379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.771411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.771608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.771640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.771848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.771879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.772057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.772089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.772300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.772334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 
00:27:17.728 [2024-11-20 16:28:48.772484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.772516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.772687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 16:28:48.772720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 16:28:48.772981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.773012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.773220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.773254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.773431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.773463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.773635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.773667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.773914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.773945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.774197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.774238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.774423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.774455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.774700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.774732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 
00:27:17.729 [2024-11-20 16:28:48.774978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.775010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.775200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.775241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.775453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.775485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.775683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.775716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.775909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.775942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.776138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.776170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.776380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.776416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.776540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.776571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.776747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.776779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.776955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.776987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 
00:27:17.729 [2024-11-20 16:28:48.777253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.777287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.777535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.777567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.777673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.777705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.777968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.778000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.778174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.778212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.778468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.778501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.778764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.778796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.778978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.779010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.779184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.779226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.779434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.779466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 
00:27:17.729 [2024-11-20 16:28:48.779597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.779628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.779870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.779904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.780076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.780107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.780296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.780330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.780594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.780625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.780810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.780842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.781064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.781097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.781236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.781270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.781377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.781408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 16:28:48.781649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 16:28:48.781682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 
00:27:17.729 [2024-11-20 16:28:48.781973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.782010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.782153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.782185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.782392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.782425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.782564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.782596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.782805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.782837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.783034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.783067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.783243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.783276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.783465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.783498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.783626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.783658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.783770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.783802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 
00:27:17.730 [2024-11-20 16:28:48.783999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.784031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.784224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.784257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.784503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.784535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.784796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.784828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.785049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.785082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.785348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.785382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.785574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.785605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.785778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.785810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.785943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.785976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.786169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.786209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 
00:27:17.730 [2024-11-20 16:28:48.786476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.786508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.786680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.786713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.786949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.786981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.787182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.787223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.787349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.787381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.787504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.787537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.787823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.787855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.788039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.788076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.788257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.788291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.788424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.788456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 
00:27:17.730 [2024-11-20 16:28:48.788723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.788755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.788883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.788916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.789105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.789136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.789421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 16:28:48.789454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 16:28:48.789649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.789681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.789785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.789817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.790001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.790033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.790224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.790258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.790439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.790471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.790712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.790744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 
00:27:17.731 [2024-11-20 16:28:48.790987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.791018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.791162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.791195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.791397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.791430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.791702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.791734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.792000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.792032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.792162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.792195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.792427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.792458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.792745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.792778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.792971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.793004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.793125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.793157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 
00:27:17.731 [2024-11-20 16:28:48.793290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.793323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.793564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.793597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.793791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.793823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.794010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.794042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.794168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.794215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.794399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.794437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.794612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.794644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.794827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.794860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.794963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.794996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.795246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.795279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 
00:27:17.731 [2024-11-20 16:28:48.795522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.795555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.795683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.795715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.795982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.796015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.796236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.796270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.796405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.796437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.796629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.796661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.796846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.796877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.796991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.797023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.797219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.797253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.797544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.797575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 
00:27:17.731 [2024-11-20 16:28:48.797696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.797729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.797851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.797883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.731 qpair failed and we were unable to recover it. 00:27:17.731 [2024-11-20 16:28:48.798089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.731 [2024-11-20 16:28:48.798121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.732 qpair failed and we were unable to recover it. 00:27:17.732 [2024-11-20 16:28:48.798290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.732 [2024-11-20 16:28:48.798324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.732 qpair failed and we were unable to recover it. 00:27:17.732 [2024-11-20 16:28:48.798515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.732 [2024-11-20 16:28:48.798547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.732 qpair failed and we were unable to recover it. 00:27:17.732 [2024-11-20 16:28:48.798670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.732 [2024-11-20 16:28:48.798703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.732 qpair failed and we were unable to recover it. 00:27:17.732 [2024-11-20 16:28:48.798894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.732 [2024-11-20 16:28:48.798926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.732 qpair failed and we were unable to recover it. 00:27:17.732 [2024-11-20 16:28:48.799120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.732 [2024-11-20 16:28:48.799153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.732 qpair failed and we were unable to recover it. 00:27:17.732 [2024-11-20 16:28:48.799339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.732 [2024-11-20 16:28:48.799372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.732 qpair failed and we were unable to recover it. 00:27:17.732 [2024-11-20 16:28:48.799617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.732 [2024-11-20 16:28:48.799649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.732 qpair failed and we were unable to recover it. 
00:27:17.732 [... the same three-line failure repeats continuously from 16:28:48.799781 through 16:28:48.845601, with only the microsecond timestamps advancing: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:27:17.737 [2024-11-20 16:28:48.845568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.737 [2024-11-20 16:28:48.845601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:17.737 qpair failed and we were unable to recover it.
00:27:17.737 [2024-11-20 16:28:48.845849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.845882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.846064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.846095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.846365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.846397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.846588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.846620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.846875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.846906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.847174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.847213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.847454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.847488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.847687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.847719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.847986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.848017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.848257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.848291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 
00:27:17.737 [2024-11-20 16:28:48.848464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.848496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.848618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.848650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.848886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 16:28:48.848916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 16:28:48.849103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.849135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.849418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.849451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.849720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.849751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.849965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.849998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.850200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.850239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.850435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.850468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.850747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.850798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 
00:27:17.738 [2024-11-20 16:28:48.850920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.850951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.851125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.851157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.851358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.851391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.851652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.851685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.851855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.851886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.852060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.852092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.852271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.852305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.852569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.852600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.852848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.852879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.853117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.853148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 
00:27:17.738 [2024-11-20 16:28:48.853403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.853436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.853627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.853659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.853975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.854045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.854280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.854318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.854586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.854619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.854902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.854933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.855222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.855255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.855472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.855505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.855743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.855774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.855905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.855937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 
00:27:17.738 [2024-11-20 16:28:48.856146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.856177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.856378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.856410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.856658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.856691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.856936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.856967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.857224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.857258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.857551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.857593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.857845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.857877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.858171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.858210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.858400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.858433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 16:28:48.858719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 16:28:48.858751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 
00:27:17.738 [2024-11-20 16:28:48.858937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.858970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.859173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.859212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.859456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.859488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.859749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.859781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.860068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.860101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.860377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.860411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.860669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.860701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.860889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.860922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.861179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.861221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.861447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.861480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 
00:27:17.739 [2024-11-20 16:28:48.861722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.861754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.861943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.861975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.862256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.862290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.862479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.862512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.862798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.862830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.863094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.863126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.863425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.863458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.863721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.863753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.863992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.864024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.864151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.864183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 
00:27:17.739 [2024-11-20 16:28:48.864431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.864464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.864680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.864712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.864975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.865013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.865255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.865290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.865493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.865524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.865730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.865762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.866013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.866045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.866235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.866269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.866538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.866569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.866854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.866885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 
00:27:17.739 [2024-11-20 16:28:48.867162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.867194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.867389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.867421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.867670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.867701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.867943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.867976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.868240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.868275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.868478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.868509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.868697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.868730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.868995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 16:28:48.869027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 16:28:48.869212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.869245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.869449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.869481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 
00:27:17.740 [2024-11-20 16:28:48.869752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.869784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.869906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.869937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.870211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.870244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.870425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.870457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.870760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.870792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.870984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.871017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.871301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.871336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.871529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.871561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.871823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.871854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.872100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.872133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 
00:27:17.740 [2024-11-20 16:28:48.872399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.872433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.872641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.872674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.872855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.872886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.873127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.873158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.873419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.873452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.873700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.873732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.873918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.873948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.874188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.874229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.874414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.874446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.874709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.874740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 
00:27:17.740 [2024-11-20 16:28:48.874974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.875006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.875246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.875280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.875466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.875504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.875745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.875777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.876047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.876079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.876366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.876400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.876671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.876703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.876945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.876977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.877246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.877280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.877570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.877601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 
00:27:17.740 [2024-11-20 16:28:48.877866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.877897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.878148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.878179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.878458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.878491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.878690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.878721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.878966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.878998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.879212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.879246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.879512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.879545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.879737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.740 [2024-11-20 16:28:48.879769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.740 qpair failed and we were unable to recover it. 00:27:17.740 [2024-11-20 16:28:48.880036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.880068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.880246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.880280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 
00:27:17.741 [2024-11-20 16:28:48.880548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.880581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.880766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.880797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.881074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.881106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.881375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.881409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.881702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.881735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.881936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.881967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.882157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.882190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.882371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.882404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.882576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.882608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.882809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.882842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 
00:27:17.741 [2024-11-20 16:28:48.883114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.883146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.883412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.883445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.883690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.883721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.884016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.884048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.884288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.884323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.884457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.884489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.884687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.884718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.884985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.885017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.885282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.885316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.885435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.885467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 
00:27:17.741 [2024-11-20 16:28:48.885707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.885738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.886038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.886070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.886262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.886302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.886544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.886576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.886767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.886798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.887035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.887067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.887334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.887368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.887504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.887536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.887801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.887833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.888113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.888145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 
00:27:17.741 [2024-11-20 16:28:48.888425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.888458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.888577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.888609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.888785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.888818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.888989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.889021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.889290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.889324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.889567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.889600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.889864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.889896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.890016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.890048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.890313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.741 [2024-11-20 16:28:48.890347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.741 qpair failed and we were unable to recover it. 00:27:17.741 [2024-11-20 16:28:48.890536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.890567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 
00:27:17.742 [2024-11-20 16:28:48.890850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.890883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.891099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.891131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.891324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.891357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.891533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.891565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.891763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.891794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.891927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.891958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.892199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.892254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.892444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.892476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.892742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.892773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.892971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.893004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 
00:27:17.742 [2024-11-20 16:28:48.893263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.893316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.893611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.893643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.893837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.893869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.894059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.894091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.894332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.894367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.894634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.894666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.894983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.895015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.895333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.895367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.895614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.895658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.895865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.895917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 
00:27:17.742 [2024-11-20 16:28:48.896234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.896272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.896545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.896581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.896840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.896879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.897117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.897149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.897352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.897386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.897653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.897686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.897824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.897857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.898068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.898119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.898414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.898450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.898628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.898660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 
00:27:17.742 [2024-11-20 16:28:48.898923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.898958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.899143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.899175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.899400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.899433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.899677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.899709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 16:28:48.900016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 16:28:48.900065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.900380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.900438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.900783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.900856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.901185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.901278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.901541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.901575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.901860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.901895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 
00:27:18.022 [2024-11-20 16:28:48.902164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.902196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.902480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.902514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.902794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.902826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.903043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.903075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.903334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.903368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.903663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.903696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.903967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.903998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.904290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.904323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.904573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.904605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.904885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.904917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 
00:27:18.022 [2024-11-20 16:28:48.905102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.905134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.905402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.905435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.905576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.022 [2024-11-20 16:28:48.905608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.022 qpair failed and we were unable to recover it. 00:27:18.022 [2024-11-20 16:28:48.905800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.905831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.906100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.906132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.906335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.906369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.906637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.906669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.906804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.906835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.907099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.907131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.907375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.907408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 
00:27:18.023 [2024-11-20 16:28:48.907675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.907707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.907886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.907919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.908136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.908174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.908502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.908536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.908749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.908780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.909051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.909083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.909300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.909333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.909452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.909484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.909609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.909641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.909933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.909964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 
00:27:18.023 [2024-11-20 16:28:48.910232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.910265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.910510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.910542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.910746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.910778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.911055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.911086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.911294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.911328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.911514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.911546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.911818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.911850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.912143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.912175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.912392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.912426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.912554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.912586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 
00:27:18.023 [2024-11-20 16:28:48.912872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.912904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.913143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.913176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.913436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.913469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.913716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.913748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.913944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.913976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.914159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.914190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.914465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.914497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.914689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.914721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.914963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.914995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.023 [2024-11-20 16:28:48.915187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.915245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 
00:27:18.023 [2024-11-20 16:28:48.915504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.023 [2024-11-20 16:28:48.915537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.023 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.915745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.915777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.916031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.916064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.916321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.916356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.916537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.916568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.916842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.916873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.917145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.917178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.917367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.917400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.917597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.917629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.917872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.917905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 
00:27:18.024 [2024-11-20 16:28:48.918125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.918157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.918413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.918446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.918698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.918735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.918933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.918966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.919159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.919191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.919465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.919498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.919780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.919811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.920010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.920043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.920310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.920344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.920659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.920692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 
00:27:18.024 [2024-11-20 16:28:48.920939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.920971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.921281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.921314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.921587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.921619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.921833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.921865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.922134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.922166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.922457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.922490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.922761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.922794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.923085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.923116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.923240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.923273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.923541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.923572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 
00:27:18.024 [2024-11-20 16:28:48.923769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.923801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.924060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.924092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.924387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.924421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.924689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.924721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.924999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.925031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.925316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.925349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.925598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.925631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.925888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 16:28:48.925920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 16:28:48.926134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.926166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.926371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.926406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 
00:27:18.025 [2024-11-20 16:28:48.926677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.926708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.926848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.926879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.927086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.927118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.927351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.927385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.927588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.927620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.927887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.927918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.928162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.928193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.928462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.928495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.928739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.928771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.929016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.929048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 
00:27:18.025 [2024-11-20 16:28:48.929317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.929351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.929546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.929579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.929776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.929814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.929996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.930028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.930270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.930304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.930549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.930581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.930774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.930806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.931077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.931109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.931406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.931439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.931709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.931741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 
00:27:18.025 [2024-11-20 16:28:48.932034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.932066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.932336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.932370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.932663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.932696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.932966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.932997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.933196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.933242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.933473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.933506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.933719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.933751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.934029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.934061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.934274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.934309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.934511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.934542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 
00:27:18.025 [2024-11-20 16:28:48.934826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.934859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.935065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.935097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.935294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.935327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.935599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.935631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.935818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.935851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 16:28:48.936113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 16:28:48.936145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.936400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.936434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.936729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.936761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.937035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.937067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.937264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.937299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 
00:27:18.026 [2024-11-20 16:28:48.937475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.937507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.937778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.937810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.938057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.938089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.938373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.938407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.938608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.938640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.938887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.938918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.939114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.939145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.939430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.939464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.939647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.939679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.939934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.939966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 
00:27:18.026 [2024-11-20 16:28:48.940246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.940280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.940536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.940568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.940841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.940878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.941171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.941211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.941528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.941561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.941761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.941792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.942049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.942080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.942376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.942410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.942681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.942713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.942894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.942925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 
00:27:18.026 [2024-11-20 16:28:48.943212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.943246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.943430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.943462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.943717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.943749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.944047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.944079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.944348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.944382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.944629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.944661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.944847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.944879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.945157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.945189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.945455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.945487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.945597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.945629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 
00:27:18.026 [2024-11-20 16:28:48.945821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.945852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.946070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.946102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 16:28:48.946300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 16:28:48.946335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.946584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.946616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.946798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.946829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.947034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.947066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.947340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.947373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.947647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.947680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.947892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.947925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.948198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.948240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 
00:27:18.027 [2024-11-20 16:28:48.948371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.948403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.948602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.948634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.948909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.948941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.949222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.949256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.949460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.949493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.949623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.949654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.949928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.949960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.950164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.950195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.950503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.950536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.950795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.950827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 
00:27:18.027 [2024-11-20 16:28:48.951009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.951042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.951226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.951259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.951554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.951593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.951838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.951870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.952128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.952160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.952457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.952492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.952762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.952794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.952909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.952941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.953223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.953256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.953508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.953540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 
00:27:18.027 [2024-11-20 16:28:48.953781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.953813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.954040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.954072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.954344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.954378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 16:28:48.954577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 16:28:48.954609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.954869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.954901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.955200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.955247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.955507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.955540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.955791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.955824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.956125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.956158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.956286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.956320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 
00:27:18.028 [2024-11-20 16:28:48.956595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.956628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.956883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.956916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.957174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.957217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.957338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.957370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.957575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.957607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.957824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.957856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.958057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.958090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.958361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.958396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.958680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.958713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.958995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.959027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 
00:27:18.028 [2024-11-20 16:28:48.959310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.959344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.959556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.959588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.959792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.959824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.960003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.960034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.960306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.960340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.960532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.960564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.960746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.960778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.961056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.961088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.961294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.961328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.961630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.961662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 
00:27:18.028 [2024-11-20 16:28:48.961892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.961923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.962164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.962196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.962408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.962448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.962661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.962694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.962886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.962919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.963194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.963239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.963424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.963457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.028 qpair failed and we were unable to recover it. 00:27:18.028 [2024-11-20 16:28:48.963702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.028 [2024-11-20 16:28:48.963734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.963984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.964018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.964249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.964282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 
00:27:18.029 [2024-11-20 16:28:48.964505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.964536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.964841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.964872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.965086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.965118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.965333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.965367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.965501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.965534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.965810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.965843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.966131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.966163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.966485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.966519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.966814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.966846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.967116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.967148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 
00:27:18.029 [2024-11-20 16:28:48.967444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.967478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.967749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.967781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.968095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.968128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.968402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.968436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.968710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.968741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.969037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.969070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.969340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.969375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.969595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.969627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.969878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.969911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.970168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.970209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 
00:27:18.029 [2024-11-20 16:28:48.970510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.970542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.970804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.970837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.971118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.971150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.971433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.971468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.971701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.971733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.972037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.972069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.972281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.972317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.972614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.972646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.972857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.972889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.973187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.973230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 
00:27:18.029 [2024-11-20 16:28:48.973367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.973398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.973682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.973714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.973979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.974016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.974168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.974199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.029 [2024-11-20 16:28:48.974346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.029 [2024-11-20 16:28:48.974377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.029 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.974575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.974605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.974860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.974893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.975119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.975151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.975443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.975476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.975698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.975730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 
00:27:18.030 [2024-11-20 16:28:48.975965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.975996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.976298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.976333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.976598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.976630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.976853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.976886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.977166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.977199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.977426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.977460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.977650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.977683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.977953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.977985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.978180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.978220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.978426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.978460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 
00:27:18.030 [2024-11-20 16:28:48.978605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.978638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.978839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.978871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.979154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.979186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.979475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.979508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.979787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.979820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.980018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.980049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.980193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.980242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.980511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.980544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.980845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.980879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.981149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.981182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 
00:27:18.030 [2024-11-20 16:28:48.981320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.981351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.981555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.981588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.981801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.981833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.982118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.982150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.982391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.982426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.982648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.982680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.982860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.982892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.983087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.983119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.983375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.983409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.983663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.983695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 
00:27:18.030 [2024-11-20 16:28:48.983890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.983923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.984197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 16:28:48.984239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 16:28:48.984366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.984403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.984607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.984638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.984747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.984777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.984979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.985011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.985215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.985247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.985461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.985494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.985737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.985769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.986038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.986069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 
00:27:18.031 [2024-11-20 16:28:48.986289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.986323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.986581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.986614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.986817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.986849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.987121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.987153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.987427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.987462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.987757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.987789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.988001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.988034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.988249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.988285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.988472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.988504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.988785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.988817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 
00:27:18.031 [2024-11-20 16:28:48.989099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.989131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.989363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.989398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.989673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.989706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.989903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.989935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.990211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.990245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.990532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.990565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.990837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.990869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.991021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.991053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.991264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.991297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.991483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.991516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 
00:27:18.031 [2024-11-20 16:28:48.991786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.991819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.992103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.992136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.992413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.992447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.992733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.992764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.993051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.993083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.993369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 16:28:48.993402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 16:28:48.993632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.993665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.993811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.993843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.993975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.994007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.994188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.994231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 
00:27:18.032 [2024-11-20 16:28:48.994487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.994521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.994728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.994760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.994910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.994949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.995062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.995094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.995276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.995311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.995573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.995605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.995855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.995887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.996188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.996247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.996459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.996492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.996765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.996798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 
00:27:18.032 [2024-11-20 16:28:48.997018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.997051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.997269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.997326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.997586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.997618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.997803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.997835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.997959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.997991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.998190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.998235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.998554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.998588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.998786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.998819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.999016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.999048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.999268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.999302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 
00:27:18.032 [2024-11-20 16:28:48.999496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.999528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.999670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.999702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:48.999928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:48.999961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.000266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.000301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.000561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.000593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.000780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.000812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.001089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.001122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.001398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.001434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.001630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.001664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.001866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.001899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 
00:27:18.032 [2024-11-20 16:28:49.002177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.002218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.002401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.002434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.002579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.002609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.002882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.002915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.003126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.003158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.003464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.003499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.003757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.003789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.004091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.004124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.004314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.004349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.004629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.004662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 
00:27:18.032 [2024-11-20 16:28:49.004930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.004962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.005261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 16:28:49.005296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 00:27:18.032 [2024-11-20 16:28:49.005547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.005580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.005834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.005866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.006150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.006182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.006465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.006499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.006703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.006736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.007014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.007046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.007346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.007380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.007647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.007680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 
00:27:18.033 [2024-11-20 16:28:49.007881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.007913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.008167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.008199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.008478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.008511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.008724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.008756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.009024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.009057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.009256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.009291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.009495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.009528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.009750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.009782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.009986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.010019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.010326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.010359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 
00:27:18.033 [2024-11-20 16:28:49.010643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.010675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.010958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.010990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.011247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.011281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.011581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.011614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.011889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.011922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.012109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.012142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.012431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.012464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.012767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.012799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.013031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.013064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.013285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.013332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 
00:27:18.033 [2024-11-20 16:28:49.013554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.013586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.013837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.013870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.014132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.014165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.014456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.014489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.014762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.014794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.014940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.014971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.015191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.015247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.015459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.015491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.015768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.015801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.016056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.016089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 
00:27:18.033 [2024-11-20 16:28:49.016346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.016380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.016597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.016628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.016830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.016862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.017104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.017136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.017415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.017450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.017733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.017765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.018017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.018049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.018254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.018289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.018481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.018513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 00:27:18.033 [2024-11-20 16:28:49.018715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.033 [2024-11-20 16:28:49.018748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.033 qpair failed and we were unable to recover it. 
00:27:18.033 [2024-11-20 16:28:49.019023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 16:28:49.019055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it.
00:27:18.034 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously, roughly two hundred more times, from 16:28:49.019364 through 16:28:49.079377 ...]
00:27:18.038 [2024-11-20 16:28:49.079669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 16:28:49.079702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 16:28:49.079930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 16:28:49.079962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 16:28:49.080141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 16:28:49.080174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 16:28:49.080454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 16:28:49.080532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 16:28:49.080809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 16:28:49.080847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 16:28:49.081049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.081083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.081345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.081381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.081589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.081624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.081877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.081910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.082104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.082148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 
00:27:18.039 [2024-11-20 16:28:49.082420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.082454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.082644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.082678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.082960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.082992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.083268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.083302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.083596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.083630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.083848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.083880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.084082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.084114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.084317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.084352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.084552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.084585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.084838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.084871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 
00:27:18.039 [2024-11-20 16:28:49.085176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.085220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.085500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.085534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.085808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.085840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.086075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.086108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.086309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.086344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.086617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.086650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.086846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.086877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.087085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.087118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.087396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.087429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.087712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.087744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 
00:27:18.039 [2024-11-20 16:28:49.087961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.087993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.088312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.088346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.088629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.088662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.088939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.088971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.089170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.089213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.089410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.089443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.089727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.089760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.090015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.090048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.090324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.090359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.090616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.090649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 
00:27:18.039 [2024-11-20 16:28:49.090837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.090869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.091125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.091157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.091448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 16:28:49.091481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 16:28:49.091775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.091807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.092088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.092121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.092372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.092406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.092683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.092716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.092862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.092895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.093167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.093199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.093486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.093525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 
00:27:18.040 [2024-11-20 16:28:49.093798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.093831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.094028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.094060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.094258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.094293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.094570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.094603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.094927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.094959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.095089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.095122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.095307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.095340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.095615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.095647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.095945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.095977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.096172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.096222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 
00:27:18.040 [2024-11-20 16:28:49.096423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.096456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.096731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.096763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.096951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.096983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.097256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.097291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.097586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.097619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.097823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.097854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.098034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.098066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.098259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.098293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.098490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.098523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.098822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.098854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 
00:27:18.040 [2024-11-20 16:28:49.099081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.099114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.099331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.099365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.099581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.099614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.099886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.099918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.100128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.100161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.100423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.100457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.100735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.100813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.101118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.101155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.101467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.101503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.101774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.101807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 
00:27:18.040 [2024-11-20 16:28:49.102098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.102131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.102282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.102317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.102616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.102649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.102935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.102969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.103199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.103248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.103448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.103481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.103738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.103772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.103969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.104002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.104283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.104317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.104513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.104546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 
00:27:18.040 [2024-11-20 16:28:49.104765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.104799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.105022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.105055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 16:28:49.105238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 16:28:49.105272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.105527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.105560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.105756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.105789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.106048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.106082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.106279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.106314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.106527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.106560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.106842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.106875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.107159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.107192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 
00:27:18.041 [2024-11-20 16:28:49.107473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.107507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.107734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.107767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.108071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.108105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.108371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.108410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.108621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.108655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.108909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.108942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.109165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.109198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.109393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.109427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.109638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.109671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.109946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.109979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 
00:27:18.041 [2024-11-20 16:28:49.110212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.110247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.110532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.110565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.110777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.110809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.110946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.110980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.111275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.111309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.111586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.111619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.111849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.111883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.112167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.112200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.112514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.112549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.112828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.112862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 
00:27:18.041 [2024-11-20 16:28:49.113077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.113110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.113394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.113430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.113711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.113744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.114023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.114056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.114262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.114296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.114570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.114603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.114882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.114915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.115131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.115163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.115425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.115460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 16:28:49.115756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 16:28:49.115790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 
00:27:18.041 [2024-11-20 16:28:49.116084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.041 [2024-11-20 16:28:49.116122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:18.041 qpair failed and we were unable to recover it.
00:27:18.041 [... the identical three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously from 16:28:49.116 through 16:28:49.176 (console time 00:27:18.041-00:27:18.047); duplicate entries elided ...]
00:27:18.047 [2024-11-20 16:28:49.176629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.176662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.176860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.176892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.177190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.177231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.177512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.177544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.177821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.177854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.178148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.178181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.178450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.178483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.178707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.178745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.179000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.179034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.179182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.179235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 
00:27:18.047 [2024-11-20 16:28:49.179494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.179527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.179808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.179840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.180094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.180127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.180407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.180442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.180726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.180759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.181016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.181048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.181199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.181241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.181496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.181529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.181810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.181843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.182124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.182157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 
00:27:18.047 [2024-11-20 16:28:49.182438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.182472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.182695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.182728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.183047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.183079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.183284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.183317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.183596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.183630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.183810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.183843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.184050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.184083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.184368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.184403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.184678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.184711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 16:28:49.184999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.185032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 
00:27:18.047 [2024-11-20 16:28:49.185234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 16:28:49.185268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.185550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.185583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.185886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.185918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.186124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.186157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.186471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.186511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.186731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.186764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.187024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.187057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.187360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.187394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.187681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.187715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.187918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.187951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 
00:27:18.048 [2024-11-20 16:28:49.188228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.188262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.188499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.188533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.188807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.188840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.189093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.189126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.189323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.189358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.189641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.189674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.189955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.189987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.190199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.190243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.190505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.190539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.190740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.190773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 
00:27:18.048 [2024-11-20 16:28:49.190993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.191026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.191222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.191255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.191441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.191474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.191729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.191762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.191962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.191994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.192177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.192221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.192480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.192513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.192791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.192824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.193077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 16:28:49.193110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 16:28:49.193314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.193349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 
00:27:18.049 [2024-11-20 16:28:49.193535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.193567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.193847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.193881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.194215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.194250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.194507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.194540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.194746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.194779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.195004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.195038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.195319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.195353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.195638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.195672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.195859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.195892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.196091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.196124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 
00:27:18.049 [2024-11-20 16:28:49.196408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.196443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.196648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.196681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.196884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.196918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.197197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.197243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.197520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.197554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.197828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.197861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.198076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.198109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.198367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.198401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.198550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.198581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.198719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.198750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 
00:27:18.049 [2024-11-20 16:28:49.198951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.198982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.199249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.199280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.199570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.199601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.199904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.199935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.200215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.200248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.200479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.200510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.200759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.200790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.201020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.201052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.201329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.201359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.201650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.201681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 
00:27:18.049 [2024-11-20 16:28:49.201961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.201991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.202279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.202311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.202595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.202627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.202896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.202926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.203224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.203256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.203538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.203570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 16:28:49.203828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 16:28:49.203860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.204058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.204089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.204309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.204341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.204645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.204676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 
00:27:18.050 [2024-11-20 16:28:49.204959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.204990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.205241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.205294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.205500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.205538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.205833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.205863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.206066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.206097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.206357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.206389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.206645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.206676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.206942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.206973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.207276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.207308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.207598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.207629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 
00:27:18.050 [2024-11-20 16:28:49.207847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.207879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.208079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.208110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.208367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.208400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.208603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.208634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.208866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.208899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.209156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.209188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.209499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.209533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.209838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.209871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.210150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.210183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.210408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.210442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 
00:27:18.050 [2024-11-20 16:28:49.210646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.210678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.210949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.210981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.211233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.211268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.211463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.211496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.211702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.211734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.211994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.212027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.212319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.212353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.212641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.212674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.212918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.212951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 00:27:18.050 [2024-11-20 16:28:49.213239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.050 [2024-11-20 16:28:49.213278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.050 qpair failed and we were unable to recover it. 
00:27:18.050 [2024-11-20 16:28:49.213550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 16:28:49.213584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050-00:27:18.335 [the same three-line sequence - posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." - repeats for every reconnect attempt logged between 16:28:49.213 and 16:28:49.272; every attempt fails identically]
00:27:18.335 [2024-11-20 16:28:49.272850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.335 [2024-11-20 16:28:49.272882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:18.335 qpair failed and we were unable to recover it.
00:27:18.335 [2024-11-20 16:28:49.273157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.273189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.273435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.273468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.273754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.273787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.273985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.274017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.274272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.274306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.274487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.274521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.274653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.274696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.274897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.274928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.275238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.275271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.275457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.275489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 
00:27:18.335 [2024-11-20 16:28:49.275690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.275721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.275856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.275889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.276083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.276115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.276404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.276438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.276744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.276776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.277046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.277078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.335 qpair failed and we were unable to recover it. 00:27:18.335 [2024-11-20 16:28:49.277233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.335 [2024-11-20 16:28:49.277267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.277491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.277524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.277727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.277759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.278012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.278044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 
00:27:18.336 [2024-11-20 16:28:49.278331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.278365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.278650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.278683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.278936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.278968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.279246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.279280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.279477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.279509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.279768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.279800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.280107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.280139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.280402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.280434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.280717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.280750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.281028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.281059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 
00:27:18.336 [2024-11-20 16:28:49.281264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.281298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.281575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.281607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.281889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.281920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.282227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.282262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.282546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.282580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.282783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.282815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.283076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.283108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.283311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.283344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.283621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.283653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.283853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.283885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 
00:27:18.336 [2024-11-20 16:28:49.284139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.284171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.284482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.284515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.284697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.284729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.285009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.285041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.285330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.285373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.285586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.285620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.285875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.285908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.286124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.336 [2024-11-20 16:28:49.286157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.336 qpair failed and we were unable to recover it. 00:27:18.336 [2024-11-20 16:28:49.286426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.286461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.286753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.286786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 
00:27:18.337 [2024-11-20 16:28:49.287018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.287051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.287261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.287296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.287582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.287621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.287878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.287925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.288237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.288280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.288505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.288540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.288769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.288802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.289108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.289141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.289409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.289443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.289729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.289762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 
00:27:18.337 [2024-11-20 16:28:49.289974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.290007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.290271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.290305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.290449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.290482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.290703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.290736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.290924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.290957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.291220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.291255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.291459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.291493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.291767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.291800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.292104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.292136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.292321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.292355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 
00:27:18.337 [2024-11-20 16:28:49.292659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.292692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.292895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.292928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.293211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.293245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.293575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.293609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.293896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.293936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.294215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.294249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.294530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.294564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.294709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.294742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.294890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.294923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.295120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.295152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 
00:27:18.337 [2024-11-20 16:28:49.295475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.295511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.295781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.295815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.296083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.296116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.296395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.296431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.296716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.296749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.297024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.337 [2024-11-20 16:28:49.297057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.337 qpair failed and we were unable to recover it. 00:27:18.337 [2024-11-20 16:28:49.297319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.297353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.297655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.297688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.297973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.298006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.298288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.298322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 
00:27:18.338 [2024-11-20 16:28:49.298477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.298508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.298692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.298725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.299004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.299037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.299315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.299349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.299502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.299534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.299687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.299720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.299981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.300013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.300292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.300327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.300611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.300644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.300851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.300884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 
00:27:18.338 [2024-11-20 16:28:49.301140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.301172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.301381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.301421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.301686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.301718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.301987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.302019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.302314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.302347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.302617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.302649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.302974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.303007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.303262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.303296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.303583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.303617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.303897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.303929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 
00:27:18.338 [2024-11-20 16:28:49.304071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.304104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.304379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.304414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.304697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.304729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.304955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.304988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.305268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.305302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.305590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.305623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.305834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.305866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.306070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.306103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.306361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.306395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.306645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.306678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 
00:27:18.338 [2024-11-20 16:28:49.306954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.306986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.307246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.307281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.307497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.307530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.307660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.338 [2024-11-20 16:28:49.307693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.338 qpair failed and we were unable to recover it. 00:27:18.338 [2024-11-20 16:28:49.307987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.308020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.308292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.308327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.308619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.308651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.308854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.308887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.309066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.309104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.309305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.309340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 
00:27:18.339 [2024-11-20 16:28:49.309614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.309647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.309850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.309883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.310047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.310078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.310335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.310369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.310604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.310636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.310789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.310823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.311103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.311136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.311276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.311310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.311584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.311616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 00:27:18.339 [2024-11-20 16:28:49.311749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.339 [2024-11-20 16:28:49.311782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.339 qpair failed and we were unable to recover it. 
00:27:18.344 [2024-11-20 16:28:49.366533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.344 [2024-11-20 16:28:49.366567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.344 qpair failed and we were unable to recover it. 00:27:18.344 [2024-11-20 16:28:49.366705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.344 [2024-11-20 16:28:49.366738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.344 qpair failed and we were unable to recover it. 00:27:18.344 [2024-11-20 16:28:49.367026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.344 [2024-11-20 16:28:49.367059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.344 qpair failed and we were unable to recover it. 00:27:18.344 [2024-11-20 16:28:49.367355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.344 [2024-11-20 16:28:49.367390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.344 qpair failed and we were unable to recover it. 00:27:18.344 [2024-11-20 16:28:49.367663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.344 [2024-11-20 16:28:49.367696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.344 qpair failed and we were unable to recover it. 00:27:18.344 [2024-11-20 16:28:49.367970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.368009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.368312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.368345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.368591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.368625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.368942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.368975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.369257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.369291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 
00:27:18.345 [2024-11-20 16:28:49.369487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.369520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.369800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.369832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.370084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.370118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.370428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.370463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.370658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.370690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.370883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.370916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.371188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.371234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.371444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.371475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.371755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.371788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.372042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.372074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 
00:27:18.345 [2024-11-20 16:28:49.372332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.372366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.372644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.372677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.372960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.372993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.373174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.373217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.373474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.373507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.373712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.373745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.374043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.374076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.374258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.374292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.374484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.374517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.374824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.374856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 
00:27:18.345 [2024-11-20 16:28:49.375139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.375172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.375411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.375445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.375730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.375763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.376063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.376095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.376362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.376397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.376597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.376629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.376895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.376928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.377230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.377264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.377536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.345 [2024-11-20 16:28:49.377569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.345 qpair failed and we were unable to recover it. 00:27:18.345 [2024-11-20 16:28:49.377755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.377788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 
00:27:18.346 [2024-11-20 16:28:49.377993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.378026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.378259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.378292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.378449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.378482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.378767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.378801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.379063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.379096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.379403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.379437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.379705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.379739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.380001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.380034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.380308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.380343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.380628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.380661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 
00:27:18.346 [2024-11-20 16:28:49.380941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.380974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.381233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.381266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.381564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.381597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.381799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.381832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.382112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.382146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.382288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.382322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.382593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.382626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.382933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.382965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.383163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.383196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.383390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.383425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 
00:27:18.346 [2024-11-20 16:28:49.383719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.383752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.383958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.383991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.384187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.384241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.384521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.384554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.384819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.384852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.385153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.385186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.385481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.385514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.385788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.385821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.386109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.386141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.386421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.386456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 
00:27:18.346 [2024-11-20 16:28:49.386597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.386630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.386910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.386943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.387244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.387279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.387544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.387589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.387732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.387764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.388068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.388101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.346 qpair failed and we were unable to recover it. 00:27:18.346 [2024-11-20 16:28:49.388336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.346 [2024-11-20 16:28:49.388370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.388614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.388647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.388950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.388983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.389263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.389297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 
00:27:18.347 [2024-11-20 16:28:49.389494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.389527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.389711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.389744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.389945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.389978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.390218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.390251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.390556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.390589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.390873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.390907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.391110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.391143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.391359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.391394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.391652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.391685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.391974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.392007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 
00:27:18.347 [2024-11-20 16:28:49.392289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.392324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.392589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.392622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.392831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.392864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.393121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.393154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.393446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.393481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.393758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.393791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.394023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.394056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.394335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.394369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.394566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.394599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.394855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.394888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 
00:27:18.347 [2024-11-20 16:28:49.395167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.395214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.395519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.395551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.395771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.395804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.396084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.396118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.396390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.396425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.396689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.396722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.397005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.397038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.397317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.397351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.397634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.397667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.397955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.397987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 
00:27:18.347 [2024-11-20 16:28:49.398266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.398300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.398520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.398554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.398752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.398784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.399087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.399120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.347 [2024-11-20 16:28:49.399394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.347 [2024-11-20 16:28:49.399429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.347 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.399585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.399618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.399874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.399907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.400221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.400255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.400534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.400568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.400838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.400870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 
00:27:18.348 [2024-11-20 16:28:49.401167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.401200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.401494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.401528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.401800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.401833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.402125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.402158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.402432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.402467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.402695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.402728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.402935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.402968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.403175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.403217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.403424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.403457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.403735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.403768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 
00:27:18.348 [2024-11-20 16:28:49.403999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.404032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.404250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.404285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.404566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.404600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.404856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.404889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.405092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.405125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.405312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.405347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.405543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.405576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.405854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.405887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.406101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.406134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.406417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.406452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 
00:27:18.348 [2024-11-20 16:28:49.406735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.406768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.407047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.407080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.407293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.407327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.407600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.407633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.407768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.407802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.407987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.408020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.408234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.408268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.408464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.408496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.408699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.408732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.408914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.408947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 
00:27:18.348 [2024-11-20 16:28:49.409226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.409259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.409519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.409553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.348 [2024-11-20 16:28:49.409853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.348 [2024-11-20 16:28:49.409886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.348 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.410109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.410141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.410333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.410368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.410655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.410688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.410909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.410941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.411223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.411258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.411483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.411516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.411709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.411742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 
00:27:18.349 [2024-11-20 16:28:49.411969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.412001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.412305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.412340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.412551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.412584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.412865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.412898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.413183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.413225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.413506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.413543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.413818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.413851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.414010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.414043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.414349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.414389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.414653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.414686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 
00:27:18.349 [2024-11-20 16:28:49.414944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.414976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.415260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.415294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.415509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.415542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.415797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.415830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.416135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.416168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.416456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.416491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.416766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.416799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.417005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.417037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.417292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.417326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.417586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.417619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 
00:27:18.349 [2024-11-20 16:28:49.417923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.417956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.418164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.418197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.418495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.418528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.418780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.418814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.419125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.419158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.419465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.419500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.419750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.419783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.420060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.420093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.420304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.420337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.420571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.420604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 
00:27:18.349 [2024-11-20 16:28:49.420885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.349 [2024-11-20 16:28:49.420917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.349 qpair failed and we were unable to recover it. 00:27:18.349 [2024-11-20 16:28:49.421200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.421246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.421452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.421486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.421686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.421719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.421997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.422030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.422316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.422358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.422651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.422684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.422900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.422933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.423237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.423271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.423531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.423564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 
00:27:18.350 [2024-11-20 16:28:49.423770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.423803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.423987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.424021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.424223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.424256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.424476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.424509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.424721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.424754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.425039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.425072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.425354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.425389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.425673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.425707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.425936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.425969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.426253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.426287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 
00:27:18.350 [2024-11-20 16:28:49.426541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.426573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.426835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.426868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.427171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.427214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.427494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.427527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.427723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.427756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.428016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.428048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.428347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.428380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.428661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.428694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.428977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.429010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.429220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.429254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 
00:27:18.350 [2024-11-20 16:28:49.429554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.429587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.429845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.429879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.430023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.430062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.430342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.430377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.350 [2024-11-20 16:28:49.430674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.350 [2024-11-20 16:28:49.430709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.350 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.430975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.431008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.431282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.431316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.431444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.431478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.431734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.431766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.432049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.432082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 
00:27:18.351 [2024-11-20 16:28:49.432314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.432348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.432615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.432649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.432931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.432964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.433115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.433147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.433378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.433413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.433616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.433649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.433896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.433972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.434253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.434292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.434571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.434606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.434882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.434915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 
00:27:18.351 [2024-11-20 16:28:49.435215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.435248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.435456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.435489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.435791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.435824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.436021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.436054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.436259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.436292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.436589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.436621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.436849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.436881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.437083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.437115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.437321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.437359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.437639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.437682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 
00:27:18.351 [2024-11-20 16:28:49.437828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.437860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.438090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.438122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.438403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.438437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.438718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.438751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.438951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.438984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.439290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.439324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.439474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.439507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.439785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.439817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.440017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.440050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.440327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.440361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 
00:27:18.351 [2024-11-20 16:28:49.440564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.440597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.440725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.440758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.351 qpair failed and we were unable to recover it. 00:27:18.351 [2024-11-20 16:28:49.441037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.351 [2024-11-20 16:28:49.441069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.441227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.441262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.441445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.441479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.441703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.441736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.441933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.441966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.442171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.442211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.442351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.442384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.442672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.442706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 
00:27:18.352 [2024-11-20 16:28:49.442983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.443016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.443324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.443356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.443631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.443664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.443865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.443897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.444116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.444149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.444359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.444391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.444721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.444800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.445052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.445089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.445388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.445424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.445664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.445698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 
00:27:18.352 [2024-11-20 16:28:49.445920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.445953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.446233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.446268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.446466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.446499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.446774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.446806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.447062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.447095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.447295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.447330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.447606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.447637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.447896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.447930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.448119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.448152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.448459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.448494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 
00:27:18.352 [2024-11-20 16:28:49.448758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.448792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.449096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.449130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.449391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.449426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.449637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.449670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.449897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.449930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.450157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.450190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.450476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.450509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.450765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.450798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.451100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.451133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.352 [2024-11-20 16:28:49.451415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.451450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 
00:27:18.352 [2024-11-20 16:28:49.451706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.352 [2024-11-20 16:28:49.451739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.352 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.452036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.452070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.452318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.452354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.452580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.452618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.452894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.452927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.453230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.453264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.453469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.453501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.453755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.453788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.454007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.454040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.454305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.454340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 
00:27:18.353 [2024-11-20 16:28:49.454623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.454655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.454949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.454982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.455256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.455291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.455582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.455614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.455834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.455867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.456171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.456212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.456422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.456456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.456746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.456779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.456987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.457020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.457298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.457333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 
00:27:18.353 [2024-11-20 16:28:49.457612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.457645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.457931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.457963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.458241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.458275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.458572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.458604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.458829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.458861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.459051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.459083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.459362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.459396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.459599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.459631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.459893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.459925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.460242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.460277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 
00:27:18.353 [2024-11-20 16:28:49.460558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.460598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.460811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.460843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.461137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.461169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.461442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.461475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.461776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.461807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.462079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.462111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.462406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.462440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.462716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.462748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.462953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.353 [2024-11-20 16:28:49.462986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.353 qpair failed and we were unable to recover it. 00:27:18.353 [2024-11-20 16:28:49.463193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.463238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 
00:27:18.354 [2024-11-20 16:28:49.463493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.463525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.463828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.463860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.464061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.464094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.464346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.464381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.464670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.464703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.464979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.465012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.465304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.465338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.465589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.465622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.465817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.465850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.466135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.466167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 
00:27:18.354 [2024-11-20 16:28:49.466317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.466350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.466555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.466586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.466792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.466826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.467041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.467073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.467352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.467386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.467594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.467627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.467934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.467965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.468160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.468200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.468491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.468523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.468663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.468695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 
00:27:18.354 [2024-11-20 16:28:49.468883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.468916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.469232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.469265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.469466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.469499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.469779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.469810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.470053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.470086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.470366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.470400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.470591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.470624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.470827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.470859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.471137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.471169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.471529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.471563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 
00:27:18.354 [2024-11-20 16:28:49.471821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.471853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.471996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.472028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.472309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.354 [2024-11-20 16:28:49.472344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.354 qpair failed and we were unable to recover it. 00:27:18.354 [2024-11-20 16:28:49.472606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.472639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.472912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.472945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.473141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.473175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.473400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.473435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.473735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.473768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.474059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.474091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.474372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.474406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 
00:27:18.355 [2024-11-20 16:28:49.474691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.474723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.474881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.474913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.475218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.475251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.475533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.475566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.475772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.475805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.476099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.476132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.476350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.476384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.476595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.476628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.476768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.476801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.476951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.476984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 
00:27:18.355 [2024-11-20 16:28:49.477239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.477273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.477466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.477499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.477697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.477729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.478008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.478041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.478296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.478330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.478639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.478672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.478922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.478955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.479224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.479258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.479478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.479512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.479793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.479826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 
00:27:18.355 [2024-11-20 16:28:49.480034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.480067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.480265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.480299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.480557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.480590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.480789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.480822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.481124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.481157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.481429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.481463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.481729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.481762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.482017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.482050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.482351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.482385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.482585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.482618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 
00:27:18.355 [2024-11-20 16:28:49.482903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.355 [2024-11-20 16:28:49.482936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.355 qpair failed and we were unable to recover it. 00:27:18.355 [2024-11-20 16:28:49.483222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.483256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.483489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.483522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.483709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.483742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.483971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.484003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.484313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.484347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.484605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.484637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.484939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.484973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.485240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.485276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.485532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.485564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 
00:27:18.356 [2024-11-20 16:28:49.485774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.485807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.486075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.486107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.486402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.486437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.486626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.486659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.486926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.486958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.487164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.487212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.487445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.487479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.487668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.487699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.487979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.488013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.488221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.488256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 
00:27:18.356 [2024-11-20 16:28:49.488392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.488425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.488562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.488595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.488845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.488878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.489156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.489188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.489438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.489472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.489678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.489711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.489991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.490024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.490313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.490348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.490631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.490665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.490942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.490975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 
00:27:18.356 [2024-11-20 16:28:49.491173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.491214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.491356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.491389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.491597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.491630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.491915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.491948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.492132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.492165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.492374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.492408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.492614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.492647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.492906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.492938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.493124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.493157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-20 16:28:49.493446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.356 [2024-11-20 16:28:49.493481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.356 qpair failed and we were unable to recover it. 
00:27:18.357 [2024-11-20 16:28:49.493665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.493697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.493907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.493939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.494227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.494268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.494488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.494522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.494747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.494780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.495062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.495095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.495351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.495386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.495657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.495690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.495974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.496006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.496194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.496236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 
00:27:18.357 [2024-11-20 16:28:49.496469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.496501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.496689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.496722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.496995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.497028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.497256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.497291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.497558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.497591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.497722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.497755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.498038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.498072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.498355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.498389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.498673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.498705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.498962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.498995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 
00:27:18.357 [2024-11-20 16:28:49.499199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.499245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.499384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.499416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.499671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.499703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.499896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.499930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.500222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.500257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.500440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.500474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.500656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.500690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.500900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.500933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.501119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.501151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.501398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.501433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 
00:27:18.357 [2024-11-20 16:28:49.501657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.501690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.501889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.501922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.502185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.502245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.502388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.502421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.502559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.502592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.502811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.502843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.503105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.503139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.503377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.357 [2024-11-20 16:28:49.503413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-20 16:28:49.503667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.503700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.503922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.503955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 
00:27:18.358 [2024-11-20 16:28:49.504155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.504189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.504345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.504379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.504613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.504645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.504707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148faf0 (9): Bad file descriptor 00:27:18.358 [2024-11-20 16:28:49.505019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.505097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.505253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.505293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.505580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.505613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.505748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.505781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.505918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.505951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.506257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.506292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 
00:27:18.358 [2024-11-20 16:28:49.506447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.506480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.506685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.506720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.506850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.506883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.507092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.507126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.507246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.507281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.507481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.507514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.507725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.507757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.507974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.508008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.508223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.508257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.508532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.508566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 
00:27:18.358 [2024-11-20 16:28:49.508753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.508786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.509004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.509037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.509163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.509197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.509418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.509452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.509594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.509627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.509834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.509867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.510145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.510178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.510417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.510451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.510672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.510706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.510901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.510934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 
00:27:18.358 [2024-11-20 16:28:49.511226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.511269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.511468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.511501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.511630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.511667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.511889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.511923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.512069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.512103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.512303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.512338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-20 16:28:49.512601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.358 [2024-11-20 16:28:49.512634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.512764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.512798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.513021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.513054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.513282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.513318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 
00:27:18.359 [2024-11-20 16:28:49.513534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.513567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.513791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.513825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.514028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.514060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.514267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.514301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.514549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.514582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.514883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.514915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.515190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.515233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.515516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.515550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.515751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.515783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.516045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.516077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 
00:27:18.359 [2024-11-20 16:28:49.516383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.516416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.516604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.516637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.516783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.516815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.517016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.517048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.517245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.517294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.517428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.517461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.517653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.517685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.517896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.517930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.518047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.518080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.518276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.518310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 
00:27:18.359 [2024-11-20 16:28:49.518564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.518596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.518753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.518786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.518965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.518997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.519119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.519152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.519363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.519396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.519656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.519689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.519826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.519859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.520038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.520071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.520210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.520245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 00:27:18.359 [2024-11-20 16:28:49.520385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.359 [2024-11-20 16:28:49.520417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.359 qpair failed and we were unable to recover it. 
00:27:18.359 [2024-11-20 16:28:49.520622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.520659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.520796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.520828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.521013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.521046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.521254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.521287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.521512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.521545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.521843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.521876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.522003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.522035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.522312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.522345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.522630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.522663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.522889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.522921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 
00:27:18.360 [2024-11-20 16:28:49.523106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.523138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.523438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.523471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.523676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.523708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.523961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.523994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.524271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.524306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.524589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.524622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.524933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.524966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.525227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.525261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.525477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.525510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.525639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.525672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 
00:27:18.360 [2024-11-20 16:28:49.525857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.525889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.526141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.526173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.526441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.526475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.526759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.526796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.527097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.527129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.527390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.527425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.527617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.527649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.527922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.527955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.528152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.528184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.528453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.528486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 
00:27:18.360 [2024-11-20 16:28:49.528702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.528734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.528961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.528992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.529246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.529280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.529434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.529466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.529651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.529682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.529935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.529968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.530099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.530132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.360 [2024-11-20 16:28:49.530342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.360 [2024-11-20 16:28:49.530375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.360 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.530677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.530710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.530978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.531010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 
00:27:18.361 [2024-11-20 16:28:49.531262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.531303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.531584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.531616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.531746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.531779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.532031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.532064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.532373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.532406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.532654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.532687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.532965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.532997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.533279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.533313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.533520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.533553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.533836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.533868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 
00:27:18.361 [2024-11-20 16:28:49.534150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.534183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.534477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.534511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.534784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.534817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.535119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.535152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.535423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.535457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.535649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.535682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.535969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.536001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.536269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.536304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.536516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.536548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.536745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.536778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 
00:27:18.361 [2024-11-20 16:28:49.536975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.537006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.537286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.537320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.537596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.537628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.537921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.537952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.538175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.538217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.538475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.538507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.538798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.538830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.539126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.539160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.539467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.539501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.539702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.539736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 
00:27:18.361 [2024-11-20 16:28:49.540033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.540065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.540321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.540355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.540609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.540642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.540824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.540856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.541133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.541166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.361 qpair failed and we were unable to recover it. 00:27:18.361 [2024-11-20 16:28:49.541388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.361 [2024-11-20 16:28:49.541421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.541651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.541683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.541872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.541904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.542219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.542252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.542555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.542587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 
00:27:18.362 [2024-11-20 16:28:49.542791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.542830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.542978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.543010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.543212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.543245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.543501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.543533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.543738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.543771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.544001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.544033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.544177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.544220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.544503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.544535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.544804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.544836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.362 [2024-11-20 16:28:49.545052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.545084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 
00:27:18.362 [2024-11-20 16:28:49.545405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.362 [2024-11-20 16:28:49.545439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.362 qpair failed and we were unable to recover it. 00:27:18.639 [2024-11-20 16:28:49.545719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.639 [2024-11-20 16:28:49.545751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.639 qpair failed and we were unable to recover it. 00:27:18.639 [2024-11-20 16:28:49.545947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.639 [2024-11-20 16:28:49.545980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.639 qpair failed and we were unable to recover it. 00:27:18.639 [2024-11-20 16:28:49.546243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.639 [2024-11-20 16:28:49.546277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.639 qpair failed and we were unable to recover it. 00:27:18.639 [2024-11-20 16:28:49.546488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.639 [2024-11-20 16:28:49.546521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.639 qpair failed and we were unable to recover it. 00:27:18.639 [2024-11-20 16:28:49.546748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.639 [2024-11-20 16:28:49.546780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.639 qpair failed and we were unable to recover it. 00:27:18.639 [2024-11-20 16:28:49.546978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.639 [2024-11-20 16:28:49.547010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.639 qpair failed and we were unable to recover it. 00:27:18.639 [2024-11-20 16:28:49.547268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.547301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.547488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.547521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.547721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.547754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 
00:27:18.640 [2024-11-20 16:28:49.547947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.547979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.548230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.548263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.548499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.548531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.548813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.548845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.549062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.549095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.549361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.549395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.549679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.549712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.549991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.550024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.550307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.550341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.550558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.550590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 
00:27:18.640 [2024-11-20 16:28:49.550786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.550818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.551107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.551139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.551365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.551398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.551629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.551661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.551881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.551914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.552117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.552150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.552352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.552386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.552708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.552741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.553003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.553035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.553341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.553377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 
00:27:18.640 [2024-11-20 16:28:49.553582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.553621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.553919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.553952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.554235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.554269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.554476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.554508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.554763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.554795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.555102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.555135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.555442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.555476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.555742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.555776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-20 16:28:49.556045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.640 [2024-11-20 16:28:49.556077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.556359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.556392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 
00:27:18.641 [2024-11-20 16:28:49.556597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.556629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.556912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.556945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.557199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.557240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.557391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.557425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.557625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.557657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.557865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.557897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.558173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.558213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.558410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.558442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.558638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.558670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.558928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.558960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 
00:27:18.641 [2024-11-20 16:28:49.559270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.559305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.559512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.559544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.559848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.559880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.560081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.560114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.560372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.560407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.560602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.560635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.560921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.560954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.561265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.561299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.561547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.561581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.561708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.561740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 
00:27:18.641 [2024-11-20 16:28:49.561986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.562019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.562227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.562260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.562460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.562493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.562705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.562737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.562958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.562991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.563272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.563307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.563568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.563602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-20 16:28:49.563901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.641 [2024-11-20 16:28:49.563933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.564223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.564258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.564513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.564545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 
00:27:18.642 [2024-11-20 16:28:49.564822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.564860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.565152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.565185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.565457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.565490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.565744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.565776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.566032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.566065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.566332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.566365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.566573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.566607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.566763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.566796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.567080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.567112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.567397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.567433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 
00:27:18.642 [2024-11-20 16:28:49.567631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.567664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.567965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.567997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.568287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.568321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.568518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.568551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.568832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.568865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.569125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.569157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.569466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.569500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.569759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.569792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.570072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.570105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.570305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.570340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 
00:27:18.642 [2024-11-20 16:28:49.570546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.570578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.570806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.570839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.571097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.571130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.571424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.571459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.571679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.571712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.571892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.571924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.572130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.572163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-20 16:28:49.572474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.642 [2024-11-20 16:28:49.572508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.572719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.572753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.573038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.573070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 
00:27:18.643 [2024-11-20 16:28:49.573351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.573386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.573644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.573676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.573816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.573849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.574036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.574068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.574281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.574314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.574592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.574626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.574911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.574944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.575226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.575260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.575541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.575574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.575795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.575827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 
00:27:18.643 [2024-11-20 16:28:49.576023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.576062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.576339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.576373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.576654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.576686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.576974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.577007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.577287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.577321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.577531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.577564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.577819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.577851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.578130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.578162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.578451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.578486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.578703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.578735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 
00:27:18.643 [2024-11-20 16:28:49.579036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.579069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.579277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.579310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.579589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.579621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.579907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.579940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.580222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.580257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.580444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.580477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.580625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.580657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.580911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.580944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.581241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.581278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.581563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.581595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 
00:27:18.643 [2024-11-20 16:28:49.581894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.581927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.582074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.643 [2024-11-20 16:28:49.582107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-20 16:28:49.582391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.582425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.582654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.582687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.583012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.583044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.583268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.583304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.583584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.583617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.583902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.583935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.584222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.584255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.584558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.584591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 
00:27:18.644 [2024-11-20 16:28:49.584853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.584885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.585117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.585150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.585416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.585450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.585658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.585691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.585961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.585994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.586278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.586311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.586519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.586553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.586754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.586787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.586978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.587010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.587293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.587328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 
00:27:18.644 [2024-11-20 16:28:49.587534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.587573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.587806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.587839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.588033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.588066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.588348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.588382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.588688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.588722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.588837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.588869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.589177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.589217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.589453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.589486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.589782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.589815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 00:27:18.644 [2024-11-20 16:28:49.590024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.644 [2024-11-20 16:28:49.590057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.644 qpair failed and we were unable to recover it. 
00:27:18.644 [2024-11-20 16:28:49.590358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.644 [2024-11-20 16:28:49.590391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.644 qpair failed and we were unable to recover it.
00:27:18.644-00:27:18.652 [2024-11-20 16:28:49.590589 - 16:28:49.649825] the same three error lines repeat for 209 further connection attempts: connect() to addr=10.0.0.2, port=4420 keeps failing with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fec98000b90, and each qpair fails and cannot be recovered.
00:27:18.652 [2024-11-20 16:28:49.650015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.650047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.650276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.650310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.650564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.650597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.650857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.650889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.651166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.651198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.651492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.651525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.651798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.651830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.652108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.652147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.652364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.652398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.652656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.652689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 
00:27:18.652 [2024-11-20 16:28:49.652966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.652998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.653198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.653243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.653473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.653506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.653705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.653738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.654014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.654046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.654230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.654264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.654542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.654574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.654718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.654750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.655013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.655046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.655314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.655349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 
00:27:18.652 [2024-11-20 16:28:49.655610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.655643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.655916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.655949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.656246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.656280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.656476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.656509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.656736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.656769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.657072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.657104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.657372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.657405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.657689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.657722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.658008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.652 [2024-11-20 16:28:49.658040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.652 qpair failed and we were unable to recover it. 00:27:18.652 [2024-11-20 16:28:49.658320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.658355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 
00:27:18.653 [2024-11-20 16:28:49.658548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.658580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.658835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.658868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.659008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.659041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.659333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.659366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.659662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.659695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.659960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.659992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.660268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.660302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.660446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.660479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.660754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.660787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.661086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.661118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 
00:27:18.653 [2024-11-20 16:28:49.661386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.661420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.661708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.661741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.662021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.662053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.662332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.662366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.662556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.662589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.662852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.662884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.663090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.663123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.663319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.663359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.663611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.663644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.663898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.663930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 
00:27:18.653 [2024-11-20 16:28:49.664151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.664184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.664468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.664501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.664782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.664814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.665078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.665111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.665313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.665348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.665625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.665657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.665854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.665887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.666145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.666179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.666455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.666487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.666710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.666742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 
00:27:18.653 [2024-11-20 16:28:49.666973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.667007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.667222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.653 [2024-11-20 16:28:49.667257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.653 qpair failed and we were unable to recover it. 00:27:18.653 [2024-11-20 16:28:49.667476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.667509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.667812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.667845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.668107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.668139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.668388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.668422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.668673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.668705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.668851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.668883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.669159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.669192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.669505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.669539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 
00:27:18.654 [2024-11-20 16:28:49.669791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.669824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.670134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.670167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.670406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.670439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.670744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.670776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.670931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.670963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.671165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.671198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.671490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.671523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.671796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.671829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.672051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.672084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.672391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.672425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 
00:27:18.654 [2024-11-20 16:28:49.672616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.672648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.672946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.672978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.673280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.673314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.673537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.673569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.673794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.673826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.674004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.674037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.674226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.674259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.674517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.674556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.674859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.674892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.675153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.675185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 
00:27:18.654 [2024-11-20 16:28:49.675401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.675434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.675637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.675670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.675948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.675981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.654 [2024-11-20 16:28:49.676285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.654 [2024-11-20 16:28:49.676319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.654 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.676585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.676618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.676917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.676950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.677222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.677257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.677544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.677577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.677850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.677881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.678172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.678213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 
00:27:18.655 [2024-11-20 16:28:49.678405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.678438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.678741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.678774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.678976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.679008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.679212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.679245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.679532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.679565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.679827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.679859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.680011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.680043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.680298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.680333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.680628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.680660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.680930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.680963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 
00:27:18.655 [2024-11-20 16:28:49.681245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.681279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.681538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.681570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.681756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.681788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.682068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.682101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.682388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.682422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.682646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.682680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.682879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.682913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.683223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.683257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.683457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.683490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 00:27:18.655 [2024-11-20 16:28:49.683766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.655 [2024-11-20 16:28:49.683798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.655 qpair failed and we were unable to recover it. 
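For context on the errno value in the records above: 111 is ECONNREFUSED on Linux, i.e. the TCP connection attempt to 10.0.0.2:4420 is being rejected because nothing is listening on that port while the target application is down. The following standalone C sketch (not SPDK code; the address and port are simply the values taken from the log) reproduces the same condition:

```c
/* Minimal sketch: reproduce errno 111 (ECONNREFUSED) by connecting to a
 * TCP port with no listener, the situation the posix_sock_create()
 * messages above report while the nvmf target is down. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port used in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, errno is ECONNREFUSED (111 on Linux). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}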
00:27:18.655 [2024-11-20 16:28:49.684114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.655 [2024-11-20 16:28:49.684147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.655 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for the attempts at 16:28:49.684473, .684789, .685105 and .685362 ...]
00:27:18.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2080335 Killed "${NVMF_APP[@]}" "$@"
[... the failure sequence repeats for the attempts at 16:28:49.685674, .685999 and .686251 ...]
00:27:18.656 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:18.656 [2024-11-20 16:28:49.686551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.656 [2024-11-20 16:28:49.686586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.656 qpair failed and we were unable to recover it.
00:27:18.656 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
[... reconnect failure sequence for the attempt at 16:28:49.686843 ...]
00:27:18.656 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
[... reconnect failure sequence for the attempt at 16:28:49.687180 ...]
00:27:18.656 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
[... reconnect failure sequence for the attempt at 16:28:49.687502 ...]
00:27:18.656 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... reconnect failure sequence repeats for the attempts at 16:28:49.687740, .688080, .688430, .688703 and .689011 ...]
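The nvmfappstart -m 0xF0 call above restarts the target with an SPDK-style CPU core mask: each set bit selects one logical core, so 0xF0 selects cores 4-7. A small sketch, assuming only the usual one-bit-per-logical-core mask semantics (it is an illustration, not SPDK's own parser), shows how such a mask decodes:

```c
/* Sketch only: list which logical cores a hex core mask such as 0xF0 selects. */
#include <stdio.h>

static void print_cores(unsigned long mask)
{
    printf("mask 0x%lx -> cores:", mask);
    for (unsigned int bit = 0; bit < 8 * sizeof(mask); bit++) {
        if (mask & (1UL << bit)) {
            printf(" %u", bit);
        }
    }
    printf("\n");
}

int main(void)
{
    print_cores(0xF0); /* the reactor mask passed via nvmfappstart -m 0xF0: cores 4-7 */
    print_cores(0x0F); /* for comparison: 0x0F would select cores 0-3 */
    return 0;
}
```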
00:27:18.656 [2024-11-20 16:28:49.689300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.656 [2024-11-20 16:28:49.689335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.656 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for every reconnect attempt logged from 16:28:49.689628 through 16:28:49.695281 ...]
00:27:18.657 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2081052
[... failure sequences for the attempts at 16:28:49.695524 and .695709, the latter interleaved with the next trace line ...]
00:27:18.657 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2081052
00:27:18.657 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
[... failure sequence for the attempt at 16:28:49.695965 ...]
00:27:18.657 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2081052 ']'
[... failure sequence for the attempt at 16:28:49.696278 ...]
00:27:18.657 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
[... failure sequence for the attempt at 16:28:49.696524 ...]
00:27:18.657 [2024-11-20 16:28:49.696771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.657 [2024-11-20 16:28:49.696805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.657 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.657 qpair failed and we were unable to recover it. 00:27:18.657 [2024-11-20 16:28:49.697022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.657 [2024-11-20 16:28:49.697055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.657 qpair failed and we were unable to recover it. 00:27:18.657 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.657 [2024-11-20 16:28:49.697255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.657 [2024-11-20 16:28:49.697295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.657 qpair failed and we were unable to recover it. 00:27:18.657 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.657 [2024-11-20 16:28:49.697508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.657 [2024-11-20 16:28:49.697544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.657 qpair failed and we were unable to recover it. 00:27:18.657 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.657 [2024-11-20 16:28:49.697806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.657 [2024-11-20 16:28:49.697842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.657 qpair failed and we were unable to recover it. 00:27:18.657 [2024-11-20 16:28:49.698054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.657 [2024-11-20 16:28:49.698087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.657 qpair failed and we were unable to recover it. 00:27:18.657 [2024-11-20 16:28:49.698256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.657 [2024-11-20 16:28:49.698291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.657 qpair failed and we were unable to recover it. 00:27:18.657 [2024-11-20 16:28:49.698520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.657 [2024-11-20 16:28:49.698554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.657 qpair failed and we were unable to recover it. 
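The xtrace lines interleaved with the connect errors above show the tc2 setup path: nvmfappstart launches nvmf_tgt with core mask 0xF0 inside the cvl_0_0_ns_spdk namespace, records its PID as nvmfpid=2081052, and waitforlisten then polls until that process answers on the RPC socket /var/tmp/spdk.sock (rpc_addr, max_retries=100). A minimal sketch of such a wait loop, assuming a standalone script rather than SPDK's actual waitforlisten helper in autotest_common.sh:

# Illustrative sketch only, not SPDK's waitforlisten implementation.
# Assumes nvmf_tgt was started in the background and $nvmfpid holds its PID.
rpc_addr=/var/tmp/spdk.sock
max_retries=100
while ((max_retries-- > 0)); do
    # Give up if the target process died before it could start listening.
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
    # Done once an RPC round-trip over the UNIX domain socket succeeds.
    if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done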
00:27:18.657 [2024-11-20 16:28:49.698700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.657 [2024-11-20 16:28:49.698734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.657 qpair failed and we were unable to recover it. 00:27:18.657 [2024-11-20 16:28:49.698853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.657 [2024-11-20 16:28:49.698893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.657 qpair failed and we were unable to recover it. 00:27:18.657 [2024-11-20 16:28:49.699105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.657 [2024-11-20 16:28:49.699138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.699461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.699496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.699716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.699753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.700022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.700056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.700276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.700311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.700552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.700585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.700871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.700904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.701063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.701096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 
00:27:18.658 [2024-11-20 16:28:49.701258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.701295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.701524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.701559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.701688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.701722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.702042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.702075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.702368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.702403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.702557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.702590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.702899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.702934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.703121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.703154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.703306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.703341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.703620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.703653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 
00:27:18.658 [2024-11-20 16:28:49.703959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.703992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.704135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.704168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.704313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.704349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.704505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.704538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.704726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.704759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.704947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.704981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.705276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.705311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.705519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.705552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.705778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.705812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.706003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.706039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 
00:27:18.658 [2024-11-20 16:28:49.706348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.706383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.706571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.706605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.706801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.706835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.707050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.707084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.707389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.658 [2024-11-20 16:28:49.707426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.658 qpair failed and we were unable to recover it. 00:27:18.658 [2024-11-20 16:28:49.707574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.707608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.707899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.707933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.708134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.708169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.708359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.708396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.708671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.708705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 
00:27:18.659 [2024-11-20 16:28:49.708830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.708863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.709141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.709182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.709351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.709386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.709549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.709582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.709798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.709831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.710037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.710069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.710334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.710377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.710614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.710661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.710902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.710946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.711250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.711305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 
00:27:18.659 [2024-11-20 16:28:49.711554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.711606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.711875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.711927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.712243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.712302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.712504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.712543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.712800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.712848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.713162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.713199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.713374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.713410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.713637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.713671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.713937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.713970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.714230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.714265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 
00:27:18.659 [2024-11-20 16:28:49.714453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.714487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.714774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.714807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.715024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.715057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.715266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.715302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.715566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.715599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.715747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.715779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.659 [2024-11-20 16:28:49.715988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.659 [2024-11-20 16:28:49.716021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.659 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.716214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.716250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.716393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.716430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.716642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.716692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 
00:27:18.660 [2024-11-20 16:28:49.716966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.717017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.717267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.717321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.717489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.717530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.717837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.717877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.718160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.718193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.718428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.718462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.718615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.718649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.718773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.718806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.719024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.719056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.719192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.719247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 
00:27:18.660 [2024-11-20 16:28:49.719380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.719414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.719645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.719686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.719824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.719858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.720066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.720100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.720250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.720284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.720414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.720448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.720667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.720699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.720896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.720929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.721099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.721145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.721338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.721389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 
00:27:18.660 [2024-11-20 16:28:49.721541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.721591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.721805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.721849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.722021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.722056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.722267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.722303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.722502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.722536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.722663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.722698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.722918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.722951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.660 qpair failed and we were unable to recover it. 00:27:18.660 [2024-11-20 16:28:49.723169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.660 [2024-11-20 16:28:49.723229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.723525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.723574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.723756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.723808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 
00:27:18.661 [2024-11-20 16:28:49.724029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.724071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.724300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.724346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.724610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.724649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.724873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.724907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.725163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.725196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.725413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.725447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.725560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.725592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.725858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.725891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.726108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.726141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.726294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.726331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 
00:27:18.661 [2024-11-20 16:28:49.726463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.726495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.726621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.726654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.726874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.726907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.727114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.727146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.727309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.727345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.727486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.727519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.727677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.727716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.727912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.727959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.728175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.728241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.728490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.728540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 
00:27:18.661 [2024-11-20 16:28:49.728721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.728765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.729003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.729051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.729251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.729287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.729477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.729509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.729772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.729804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.730077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.730110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.661 qpair failed and we were unable to recover it. 00:27:18.661 [2024-11-20 16:28:49.730392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.661 [2024-11-20 16:28:49.730428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.730637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.730671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.730865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.730900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.731091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.731124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 
00:27:18.662 [2024-11-20 16:28:49.731256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.731292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.731434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.731467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.731601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.731634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.731749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.731784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.731989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.732022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.732228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.732268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.732487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.732537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.732763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.732809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.733146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.733185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.733452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.733487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 
00:27:18.662 [2024-11-20 16:28:49.733684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.733717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.733946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.733983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.734191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.734246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.734549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.734599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.734847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.734900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.735126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.735173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.735440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.735478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.735720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.735755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.735909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.735943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.736070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.736103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 
00:27:18.662 [2024-11-20 16:28:49.736256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.736292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.736579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.736612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.736752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.736785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.736978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.737010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.737155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.737188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.737545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.737581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.662 [2024-11-20 16:28:49.737703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.662 [2024-11-20 16:28:49.737735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.662 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.737865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.737897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.738096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.738128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.738252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.738288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 
00:27:18.663 [2024-11-20 16:28:49.738490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.738523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.738824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.738882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.739184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.739248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.739558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.739607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.739827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.739863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.740117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.740150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.740455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.740492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.740795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.740829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.741010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.741043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.741254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.741290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 
00:27:18.663 [2024-11-20 16:28:49.741437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.741470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.741654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.741687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.741839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.741872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.742013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.742045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.742169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.742214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.742496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.742530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.742718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.742752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.743046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.743095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.743253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.743303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.743542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.743587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 
00:27:18.663 [2024-11-20 16:28:49.743892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.743929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.744197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.744247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.744512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.744548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.744746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.744779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.744923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.744969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.745268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.745319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.663 qpair failed and we were unable to recover it. 00:27:18.663 [2024-11-20 16:28:49.745608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.663 [2024-11-20 16:28:49.745660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.745914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.745961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.746130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.746179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.746511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.746549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 
00:27:18.664 [2024-11-20 16:28:49.746659] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:27:18.664 [2024-11-20 16:28:49.746672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.746709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 [2024-11-20 16:28:49.746719] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.746929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.746963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.747073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.747104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.747376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.747413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.747669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.747703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.747852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.747886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.748034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.748067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.748249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.748287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.748484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.748523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 
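The errno = 111 reported by posix_sock_create() above is Linux's ECONNREFUSED: the initiator's TCP connect() to 10.0.0.2:4420 (4420 being the well-known NVMe/TCP port) is being refused, which nvme_tcp_qpair_connect_sock() then surfaces as a qpair connection error. The interleaved "Starting SPDK ... / DPDK EAL parameters" lines show the nvmf target process still initializing, so nothing is listening on that port yet. As a minimal standalone sketch (illustrative only, not part of the SPDK test code; the address and port are copied from the log), a plain blocking connect() to a reachable host with no listener reproduces the same errno:

/* sketch: reproduce "connect() failed, errno = 111" (ECONNREFUSED on Linux)
 * by connecting to a host/port where nothing is listening yet. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Address/port as seen in the log above; illustrative, not a real setup. */
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* If the host is reachable but no listener is bound to port 4420, the
     * kernel answers with RST and connect() fails with errno 111. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Once the target finishes bringing up its listener on 4420, the same connect() succeeds, which is why these repeated qpair errors typically stop after initialization completes.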
00:27:18.664 [2024-11-20 16:28:49.748801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.748844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.749114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.749155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.749308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.749346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.749560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.749598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.749898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.749949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.750171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.750255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.750570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.750620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.750791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.750839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.751143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.751197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.751520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.751575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 
00:27:18.664 [2024-11-20 16:28:49.751849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.751901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.752138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.752189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.752502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.752556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.752718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.752769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.753085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.753137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.753435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.664 [2024-11-20 16:28:49.753490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.664 qpair failed and we were unable to recover it. 00:27:18.664 [2024-11-20 16:28:49.753711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.753764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.753934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.753986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.754248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.754294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.754581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.754634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 
00:27:18.665 [2024-11-20 16:28:49.754988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.755042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.755261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.755315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.755554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.755597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.755824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.755876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.756166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.756265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.756419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.756458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.756645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.756680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.756818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.756852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.757065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.757099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.757282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.757317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 
00:27:18.665 [2024-11-20 16:28:49.757547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.757580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.757781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.757822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.758005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.758037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.758249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.758285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.758509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.758543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.758739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.758771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.758990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.759023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.759183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.759227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.759432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.759465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.759680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.759713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 
00:27:18.665 [2024-11-20 16:28:49.759899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.759933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.760149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.760182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.760415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.760449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.760652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.760686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.760962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.760995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.761271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.761306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.665 [2024-11-20 16:28:49.761502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.665 [2024-11-20 16:28:49.761536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.665 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.761663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.761696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.761981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.762014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.762137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.762172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 
00:27:18.666 [2024-11-20 16:28:49.762420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.762454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.762703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.762736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.762873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.762907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.763026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.763059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.763191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.763234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.763492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.763524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.763654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.763687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.763808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.763841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.764114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.764146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.764432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.764468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 
00:27:18.666 [2024-11-20 16:28:49.764652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.764685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.764900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.764935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.765134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.765167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.765364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.765399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.765582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.765620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.765818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.765851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.765973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.766006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.766227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.766262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.766387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.766428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.766559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.766593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 
00:27:18.666 [2024-11-20 16:28:49.766788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.766820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.766999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.767033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.767169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.767216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.767345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.767378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.767577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.767610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.767866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.767899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.768098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.666 [2024-11-20 16:28:49.768131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.666 qpair failed and we were unable to recover it. 00:27:18.666 [2024-11-20 16:28:49.768383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.768418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.768607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.768641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.768761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.768794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 
00:27:18.667 [2024-11-20 16:28:49.768976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.769010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.769164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.769197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.769335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.769384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.769600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.769633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.769834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.769866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.769985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.770036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.770308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.770342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.770461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.770493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.770629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.770661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.770916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.770949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 
00:27:18.667 [2024-11-20 16:28:49.771145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.771177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.771387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.771422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.771615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.771648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.771923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.771957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.772167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.772210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.772416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.772449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.772595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.772627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.772747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.772779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.772919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.772952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.667 [2024-11-20 16:28:49.773223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.773257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 
00:27:18.667 [2024-11-20 16:28:49.773381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.667 [2024-11-20 16:28:49.773414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.667 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.773663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.773695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.773973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.774005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.774196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.774242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.774453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.774485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.774618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.774649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.774865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.774898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.775109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.775142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.775376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.775410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.775603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.775643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 
00:27:18.668 [2024-11-20 16:28:49.775868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.775902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.776179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.776226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.776511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.776545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.776736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.776770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.776965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.776997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.777257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.777292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.777487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.777521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.777647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.777681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.777874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.777907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.778182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.778225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 
00:27:18.668 [2024-11-20 16:28:49.778507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.778539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.778791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.778823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.778958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.778991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.779120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.779152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.779311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.779344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.779548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.779581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.779780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.779813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.780025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.780057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.780271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.780305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.780503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.780535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 
00:27:18.668 [2024-11-20 16:28:49.780748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.780782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.668 [2024-11-20 16:28:49.780965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.668 [2024-11-20 16:28:49.780997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.668 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.781212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.781245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.781444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.781477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.781745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.781776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.781959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.781991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.782248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.782287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.782490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.782523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.782747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.782780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.782990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.783023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 
00:27:18.669 [2024-11-20 16:28:49.783146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.783179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.783446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.783479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.783729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.783762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.783940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.783971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.784089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.784130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.784330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.784364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.784552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.784584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.784708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.784741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.784851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.784883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.785092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.785124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 
00:27:18.669 [2024-11-20 16:28:49.785323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.785357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.785469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.785500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.785623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.785655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.785925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.785958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.786158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.786190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.786468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.786501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.786804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.786838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.787092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.787124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.787302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.787337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.787532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.787566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 
00:27:18.669 [2024-11-20 16:28:49.787667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.787699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.669 qpair failed and we were unable to recover it. 00:27:18.669 [2024-11-20 16:28:49.787888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.669 [2024-11-20 16:28:49.787919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.788113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.788145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.788426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.788458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.788651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.788684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.788875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.788907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.789119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.789151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.789374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.789407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.789620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.789653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.789760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.789792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 
00:27:18.670 [2024-11-20 16:28:49.789970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.790003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.790215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.790252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.790375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.790406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.790697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.790730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.790885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.790917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.791107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.791139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.791418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.791452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.791697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.791770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.792007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.792044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.792341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.792378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 
00:27:18.670 [2024-11-20 16:28:49.792591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.792625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.792832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.792864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.793008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.793042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.793171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.793217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.793499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.793533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.793721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.793753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.794023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.794056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.794256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.794291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.794480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.794513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 00:27:18.670 [2024-11-20 16:28:49.794712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.670 [2024-11-20 16:28:49.794745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.670 qpair failed and we were unable to recover it. 
00:27:18.670 [2024-11-20 16:28:49.794873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.794906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.795111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.795144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.795281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.795317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.795534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.795567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.795713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.795745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.795873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.795906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.796138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.796171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.796366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.796401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.796511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.796544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.796759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.796792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 
00:27:18.671 [2024-11-20 16:28:49.796935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.796967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.797162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.797194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.797464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.797498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.797685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.797717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.797855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.797889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.798018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.798051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.798164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.798197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.798431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.798464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.798589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.798622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.798814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.798846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 
00:27:18.671 [2024-11-20 16:28:49.798959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.798992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.799113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.799144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.799338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.799373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.799541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.799573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.799762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.799794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.799986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.800018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.800227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.800262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.800545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.800585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.800835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.800868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.800996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.801029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 
00:27:18.671 [2024-11-20 16:28:49.801230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.801266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.801387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.801419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.671 [2024-11-20 16:28:49.801608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.671 [2024-11-20 16:28:49.801642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.671 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.801818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.801850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.802041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.802075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.802285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.802320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.802511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.802544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.802674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.802709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.802959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.802992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.803170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.803217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 
00:27:18.672 [2024-11-20 16:28:49.803415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.803448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.803634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.803667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.803855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.803887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.804149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.804182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.804379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.804415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.804554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.804586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.804826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.804860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.804976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.805009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.805132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.805165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.805380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.805415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 
00:27:18.672 [2024-11-20 16:28:49.805530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.805563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.805684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.805716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.805829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.805862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.805998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.806030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.806285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.806322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.806442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.806474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.806725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.806758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.806889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.806923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.807116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.807148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.807352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.807387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 
00:27:18.672 [2024-11-20 16:28:49.807529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.807563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.807753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.807787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.807924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.672 [2024-11-20 16:28:49.807957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.672 qpair failed and we were unable to recover it. 00:27:18.672 [2024-11-20 16:28:49.808149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.808182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.808376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.808412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.808618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.808650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.808860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.808893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.809077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.809115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.809314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.809349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.809553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.809585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 
00:27:18.673 [2024-11-20 16:28:49.809721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.809754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.809930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.809964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.810097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.810147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.810338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.810373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.810489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.810521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.810701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.810733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.810849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.810881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.811127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.811160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.811414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.811450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.811578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.811611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 
00:27:18.673 [2024-11-20 16:28:49.811796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.811828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.812127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.812161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.812377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.812411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.812654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.812686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.812806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.812839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.813026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.813057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.813239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.813275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.813472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.813504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.813688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.813721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 00:27:18.673 [2024-11-20 16:28:49.813903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.673 [2024-11-20 16:28:49.813935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.673 qpair failed and we were unable to recover it. 
00:27:18.674 [2024-11-20 16:28:49.814054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.814087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.814219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.814254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.814448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.814481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.814688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.814720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.814850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.814884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.815014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.815046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.815170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.815214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.815435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.815468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.815611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.815644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.815765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.815797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 
00:27:18.674 [2024-11-20 16:28:49.815905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.815937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.816108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.816141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.816280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.816315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.816498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.816530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.816706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.816739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.816872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.816903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.817014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.817046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.817152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.817190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.817325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.817358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.817596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.817629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 
00:27:18.674 [2024-11-20 16:28:49.817742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.817775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.817980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.818013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.818188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.818238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.818355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.818387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.818500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.818532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.818709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.818742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.818952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.818986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.819108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.819143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.819374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.819409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.819531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.819563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 
00:27:18.674 [2024-11-20 16:28:49.819680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.819712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.674 [2024-11-20 16:28:49.819828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.674 [2024-11-20 16:28:49.819860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.674 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.820093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.820125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.820238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.820273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.820410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.820443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.820636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.820668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.820841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.820874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.821008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.821040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.821147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.821180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.821343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.821377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 
00:27:18.675 [2024-11-20 16:28:49.821484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.821517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.821704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.821737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.821930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.821962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.822149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.822182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.822320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.822355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.822536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.822569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.822693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.822726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.822901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.822933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.823128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.823159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.823277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.823313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 
00:27:18.675 [2024-11-20 16:28:49.823448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.823480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.823633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.823665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.823801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.823834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.823953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.823985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.824105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.824137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.824260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.824296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.824542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.824575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.824752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.824795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.824974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.825005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.825176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.825225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 
00:27:18.675 [2024-11-20 16:28:49.825367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.825401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.825587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.825620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.825809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.825842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.826041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.675 [2024-11-20 16:28:49.826073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.675 qpair failed and we were unable to recover it. 00:27:18.675 [2024-11-20 16:28:49.826196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.826245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.826427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.826462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.826603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.826635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.826765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.826797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.826909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.826941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.827064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.827097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 
00:27:18.676 [2024-11-20 16:28:49.827220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.827254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.827382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.827417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.827591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.827626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.827811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.827844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.827965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.827997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.828215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.828250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.828499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.828531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.828706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.828739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.828843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.828875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.829082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.829115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 
00:27:18.676 [2024-11-20 16:28:49.829312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.829347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.829488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.829521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.829704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.829738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.829975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.830007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.830130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.830164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.830355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.830390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.830528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.830560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.830688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.830720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.830966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.830997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.831171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.831216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 
00:27:18.676 [2024-11-20 16:28:49.831407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.831441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.831570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.831603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.831780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.831813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.832011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.832044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.832293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.832328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.832452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.676 [2024-11-20 16:28:49.832485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.676 qpair failed and we were unable to recover it. 00:27:18.676 [2024-11-20 16:28:49.832613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.832646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.832887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.832926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.833116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.833148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 
00:27:18.677 [2024-11-20 16:28:49.833214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.677 [2024-11-20 16:28:49.833403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.833438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.833563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.833596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.833781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.833813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.833955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.833987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.834231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.834267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.834412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.834446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.834570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.834602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.834776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.834809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.834937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.834970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 
00:27:18.677 [2024-11-20 16:28:49.835097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.835129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.835386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.835422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.835631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.835671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.835811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.835844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.835978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.836011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.836131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.836163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.836405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.836440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.836627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.836659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.836778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.836812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.837024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.837056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 
00:27:18.677 [2024-11-20 16:28:49.837248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.837283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.837468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.837502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.837637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.837669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.837858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.837891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.838066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.838100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.838285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.838319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.838512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.838545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.838750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.677 [2024-11-20 16:28:49.838783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.677 qpair failed and we were unable to recover it. 00:27:18.677 [2024-11-20 16:28:49.838910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.838944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.839069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.839101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 
00:27:18.678 [2024-11-20 16:28:49.839289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.839322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.839431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.839464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.839583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.839615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.839747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.839780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.839951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.839985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.840177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.840215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.840462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.840496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.840646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.840678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.840801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.840833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.841094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.841166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 
00:27:18.678 [2024-11-20 16:28:49.841335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.841373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.841485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.841517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.841782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.841814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.841933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.841965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.842096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.842127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.842257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.842292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.842474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.842507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.842678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.842712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.842828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.842859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.843048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.843079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 
00:27:18.678 [2024-11-20 16:28:49.843297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.843332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.843454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.843487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.843601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.843640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.843772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.843805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.843941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.843973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.844224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.844259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.678 qpair failed and we were unable to recover it. 00:27:18.678 [2024-11-20 16:28:49.844391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.678 [2024-11-20 16:28:49.844422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.844621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.844655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.844775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.844807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.844938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.844971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 
00:27:18.679 [2024-11-20 16:28:49.845150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.845183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.845334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.845366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.845486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.845518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.845634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.845667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.845804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.845835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.846008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.846042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.846162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.846217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.846342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.846375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.846485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.846518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.846700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.846733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 
00:27:18.679 [2024-11-20 16:28:49.846852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.846884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.847052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.847083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.847216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.847250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.847442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.847474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.847656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.847689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.847797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.847830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.847966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.847999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.848123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.848156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.848287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.848321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.848434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.848472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 
00:27:18.679 [2024-11-20 16:28:49.848609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.848642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.848845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.848878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.848984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.849016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.849197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.849244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.849368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.849401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.849575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.849607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.849726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.849760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.849876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.849909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.850036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.850069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.679 [2024-11-20 16:28:49.850271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.850305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 
00:27:18.679 [2024-11-20 16:28:49.850424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.679 [2024-11-20 16:28:49.850457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.679 qpair failed and we were unable to recover it. 00:27:18.680 [2024-11-20 16:28:49.850576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.680 [2024-11-20 16:28:49.850609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.680 qpair failed and we were unable to recover it. 00:27:18.680 [2024-11-20 16:28:49.850801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.680 [2024-11-20 16:28:49.850833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.680 qpair failed and we were unable to recover it. 00:27:18.680 [2024-11-20 16:28:49.850961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.680 [2024-11-20 16:28:49.850994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.680 qpair failed and we were unable to recover it. 00:27:18.680 [2024-11-20 16:28:49.851100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.680 [2024-11-20 16:28:49.851134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.680 qpair failed and we were unable to recover it. 00:27:18.680 [2024-11-20 16:28:49.851265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.680 [2024-11-20 16:28:49.851300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.680 qpair failed and we were unable to recover it. 00:27:18.680 [2024-11-20 16:28:49.851415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.680 [2024-11-20 16:28:49.851447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.680 qpair failed and we were unable to recover it. 00:27:18.680 [2024-11-20 16:28:49.851619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.680 [2024-11-20 16:28:49.851652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.680 qpair failed and we were unable to recover it. 00:27:18.680 [2024-11-20 16:28:49.851831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.680 [2024-11-20 16:28:49.851864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.680 qpair failed and we were unable to recover it. 00:27:18.680 [2024-11-20 16:28:49.851975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.680 [2024-11-20 16:28:49.852007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.680 qpair failed and we were unable to recover it. 
00:27:18.680 [2024-11-20 16:28:49.852180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.680 [2024-11-20 16:28:49.852221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.680 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.852339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.852371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.852504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.852536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.852713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.852746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.852853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.852885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.853055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.853087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.853192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.853241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.853436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.853468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.853612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.853645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.853772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.853806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 
00:27:18.957 [2024-11-20 16:28:49.853992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.854025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.854219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.854253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.854384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.854416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.854549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.854581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.854763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.854795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.854904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.854937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.855043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.855076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.855314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.855348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.855539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.855572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 00:27:18.957 [2024-11-20 16:28:49.855747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.957 [2024-11-20 16:28:49.855780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.957 qpair failed and we were unable to recover it. 
00:27:18.957 [2024-11-20 16:28:49.855891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.855923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.856032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.856065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.856172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.856211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.856402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.856435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.856611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.856645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.856761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.856793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.856903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.856937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.857066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.857100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.857220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.857254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.857381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.857414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 
00:27:18.958 [2024-11-20 16:28:49.857632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.857665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.857770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.857803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.857936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.857970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.858102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.858136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.858334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.858367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.858482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.858515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.858636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.858668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.858808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.858841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.859015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.859048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.859181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.859227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 
00:27:18.958 [2024-11-20 16:28:49.859398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.859430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.859617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.859650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.859760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.859793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.859916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.859950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.860142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.860175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.860362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.860396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.860942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.860989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.861115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.861148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.861290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.958 [2024-11-20 16:28:49.861324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.958 qpair failed and we were unable to recover it. 00:27:18.958 [2024-11-20 16:28:49.861432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.861464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 
00:27:18.959 [2024-11-20 16:28:49.861596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.861629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.861753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.861786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.861904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.861936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.862042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.862074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.862253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.862287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.862401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.862433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.862616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.862649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.862752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.862785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.862890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.862923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.863121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.863154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 
00:27:18.959 [2024-11-20 16:28:49.863276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.863310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.863420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.863452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.863557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.863590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.863801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.863834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.863951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.863984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.864116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.864150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.864286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.864318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.864508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.864540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.864661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.864693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.864795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.864827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 
00:27:18.959 [2024-11-20 16:28:49.864948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.864980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.865111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.865144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.865327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.865360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.865488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.865521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.865650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.865682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.865810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.865841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.865972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.866005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.866136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.866168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.866338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.866407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 00:27:18.959 [2024-11-20 16:28:49.866553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.959 [2024-11-20 16:28:49.866589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.959 qpair failed and we were unable to recover it. 
00:27:18.960 [2024-11-20 16:28:49.866701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.866734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.866849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.866881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.867038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.867071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.867175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.867228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.867365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.867396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.867500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.867541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.867720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.867762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.867889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.867921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.868058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.868090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.868218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.868252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 
00:27:18.960 [2024-11-20 16:28:49.868436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.868468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.868588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.868622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.868758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.868790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.868924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.868957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.869087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.869119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.869243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.869277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.869429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.869462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.869571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.869602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.869711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.869744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.869853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.869886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 
00:27:18.960 [2024-11-20 16:28:49.870102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.870134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.870257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.870291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.870406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.870438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.870560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.870593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.870711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.870744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.870867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.870900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.871020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.871052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.871169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.871211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.871395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.871427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.871623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.871656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 
00:27:18.960 [2024-11-20 16:28:49.871870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.871903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.872014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.872046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.960 qpair failed and we were unable to recover it. 00:27:18.960 [2024-11-20 16:28:49.872232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.960 [2024-11-20 16:28:49.872266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.872383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.872415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.872543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.872579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.872789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.872821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.872932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.872964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.873085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.873118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.873241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.873275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.873397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.873430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 
00:27:18.961 [2024-11-20 16:28:49.873537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.873573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.873823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.873855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.874042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.874074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.874192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.874234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.874421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.874455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.874575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.874607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.874731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.874776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.874908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.874942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.875129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.875163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.875316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.875377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 
00:27:18.961 [2024-11-20 16:28:49.875503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.875537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.875666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.875703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.875888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.875922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.875966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.961 [2024-11-20 16:28:49.875990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.961 [2024-11-20 16:28:49.875997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.961 [2024-11-20 16:28:49.876004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.961 [2024-11-20 16:28:49.876010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.961 [2024-11-20 16:28:49.876044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.876076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.876218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.876266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.876566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.876613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.876765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.876811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 00:27:18.961 [2024-11-20 16:28:49.877020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.877058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.961 qpair failed and we were unable to recover it. 
00:27:18.961 [2024-11-20 16:28:49.877280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.961 [2024-11-20 16:28:49.877317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.877451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.877486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.877620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.877657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.877639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:18.962 [2024-11-20 16:28:49.877745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:18.962 [2024-11-20 16:28:49.877831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:18.962 [2024-11-20 16:28:49.877832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:18.962 [2024-11-20 16:28:49.877793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.877826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.877943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.877974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.878148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.878193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.878345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.878392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.878689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.878736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.878954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.878991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 
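The "Reactor started on core N" notices interleaved with the connection errors above come from SPDK starting one polling thread pinned to each selected CPU core (cores 4-7 in this run). The sketch below is only an illustration of that pin-one-thread-per-core pattern, not SPDK's reactor implementation; it uses the Linux-specific pthread_setaffinity_np and needs -lpthread.

/* Illustration only -- not SPDK's reactor code. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *reactor_main(void *arg)
{
    long core = (long)arg;

    /* pin this thread to its core, as the reactor framework does */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    printf("Reactor started on core %ld\n", core);

    /* a real reactor polls its registered pollers until shutdown;
     * the sketch just yields a few times and returns */
    for (int i = 0; i < 1000; i++)
        sched_yield();
    return NULL;
}

int main(void)
{
    long cores[] = { 4, 5, 6, 7 };   /* the cores named in the notices above */
    pthread_t threads[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, reactor_main, (void *)cores[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}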
00:27:18.962 [2024-11-20 16:28:49.879113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.879155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.879360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.879395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.879626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.879658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.879807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.879840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.879964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.879997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.880104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.880138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.880270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.880305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.880493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.880527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.880720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.880753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.880862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.880894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 
00:27:18.962 [2024-11-20 16:28:49.881015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.881047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.881173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.881218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.881418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.881452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.881575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.881608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.881725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.881756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.881939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.881972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.882151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.882236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.882418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.882477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.882612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.962 [2024-11-20 16:28:49.882647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.962 qpair failed and we were unable to recover it. 00:27:18.962 [2024-11-20 16:28:49.882863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.882896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 
00:27:18.963 [2024-11-20 16:28:49.883165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.883199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.883348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.883381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.883498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.883531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.883708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.883741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.883846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.883878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.884082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.884115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.884261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.884296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.884561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.884594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.884860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.884892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.885075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.885108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 
00:27:18.963 [2024-11-20 16:28:49.885234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.885270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.885462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.885495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.885607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.885640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.885756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.885790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.885969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.886001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.886119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.886151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.886287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.886320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.886448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.886480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.886604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.886637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.886828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.886860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 
00:27:18.963 [2024-11-20 16:28:49.887050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.887084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.887357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.887391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.963 [2024-11-20 16:28:49.887505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.963 [2024-11-20 16:28:49.887538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.963 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.887659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.887699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.887878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.887922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.888069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.888101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.888225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.888260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.888445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.888480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.888664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.888696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.888942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.888974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 
00:27:18.964 [2024-11-20 16:28:49.889108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.889142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.889324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.889357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.889470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.889503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.889623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.889655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.889762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.889795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.889966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.889998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.890141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.890174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.890315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.890348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.890473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.890510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.890697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.890730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 
00:27:18.964 [2024-11-20 16:28:49.890860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.890893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.891019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.891052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.891242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.891277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.891460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.891494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.891746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.891779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.891884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.891917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.892089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.892123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.892245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.892280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.892404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.892438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.892610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.892643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 
00:27:18.964 [2024-11-20 16:28:49.892768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.892809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.892941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.892974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.893178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.893222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.893333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.893367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.964 [2024-11-20 16:28:49.893472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.964 [2024-11-20 16:28:49.893505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.964 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.893632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.893665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.893853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.893886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.894001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.894034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.894138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.894171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.894315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.894351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 
00:27:18.965 [2024-11-20 16:28:49.894536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.894571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.894686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.894720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.894840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.894874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.895014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.895047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.895262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.895306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.895430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.895465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.895645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.895678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.895807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.895844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.896041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.896078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.896285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.896320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 
00:27:18.965 [2024-11-20 16:28:49.896438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.896471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.896640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.896675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.896934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.896969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.897109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.897143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.897264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.897300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.897529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.897565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.897756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.897791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.897983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.898026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.898223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.898258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.898445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.898479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 
00:27:18.965 [2024-11-20 16:28:49.898598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.898633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.898756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.898790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.898897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.898930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.899110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.965 [2024-11-20 16:28:49.899144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.965 qpair failed and we were unable to recover it. 00:27:18.965 [2024-11-20 16:28:49.899277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.899312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.899421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.899455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.899582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.899616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.899740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.899775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.899952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.899988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.900165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.900199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 
00:27:18.966 [2024-11-20 16:28:49.900488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.900525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.900653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.900687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.900865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.900900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.901022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.901056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.901183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.901229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.901362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.901396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.901503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.901536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.901643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.901677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.901906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.901941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.902145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.902180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 
00:27:18.966 [2024-11-20 16:28:49.902447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.902484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.902673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.902708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.902921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.902956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.903154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.903187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.903375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.903410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.903610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.903646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.903843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.903877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.904003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.904037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.904183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.904228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.904418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.904452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 
00:27:18.966 [2024-11-20 16:28:49.904583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.904617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.904750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.904783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.904964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.904997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.966 [2024-11-20 16:28:49.905125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.966 [2024-11-20 16:28:49.905158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.966 qpair failed and we were unable to recover it. 00:27:18.967 [2024-11-20 16:28:49.905319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.967 [2024-11-20 16:28:49.905354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.967 qpair failed and we were unable to recover it. 00:27:18.967 [2024-11-20 16:28:49.905475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.967 [2024-11-20 16:28:49.905508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.967 qpair failed and we were unable to recover it. 00:27:18.967 [2024-11-20 16:28:49.905636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.967 [2024-11-20 16:28:49.905670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.967 qpair failed and we were unable to recover it. 00:27:18.967 [2024-11-20 16:28:49.905858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.967 [2024-11-20 16:28:49.905892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.967 qpair failed and we were unable to recover it. 00:27:18.967 [2024-11-20 16:28:49.906181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.967 [2024-11-20 16:28:49.906261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.967 qpair failed and we were unable to recover it. 00:27:18.967 [2024-11-20 16:28:49.906435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.967 [2024-11-20 16:28:49.906480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.967 qpair failed and we were unable to recover it. 
00:27:18.967 [2024-11-20 16:28:49.906733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.967 [2024-11-20 16:28:49.906783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420
00:27:18.967 qpair failed and we were unable to recover it.
00:27:18.967 [... the same "posix_sock_create: connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock: sock connection error" pair repeats continuously from 16:28:49.906 through 16:28:49.947, cycling over tqpair=0x7fec9c000b90, tqpair=0x7feca4000b90 and tqpair=0x1481ba0, always against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:18.974 [2024-11-20 16:28:49.947654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.974 [2024-11-20 16:28:49.947686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420
00:27:18.974 qpair failed and we were unable to recover it.
00:27:18.974 [2024-11-20 16:28:49.947887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.947919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 00:27:18.974 [2024-11-20 16:28:49.948096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.948129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 00:27:18.974 [2024-11-20 16:28:49.948327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.948361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 00:27:18.974 [2024-11-20 16:28:49.948477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.948509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 00:27:18.974 [2024-11-20 16:28:49.948709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.948741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 00:27:18.974 [2024-11-20 16:28:49.948937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.948969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 00:27:18.974 [2024-11-20 16:28:49.949074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.949106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 00:27:18.974 [2024-11-20 16:28:49.949292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.949325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 00:27:18.974 [2024-11-20 16:28:49.949513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.949545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 00:27:18.974 [2024-11-20 16:28:49.949677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.949710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 
00:27:18.974 [2024-11-20 16:28:49.949888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.949920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 00:27:18.974 [2024-11-20 16:28:49.950048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.974 [2024-11-20 16:28:49.950081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.974 qpair failed and we were unable to recover it. 00:27:18.974 [2024-11-20 16:28:49.950271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.950305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.950494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.950526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.950764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.950796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.950932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.950977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.951098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.951132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.951257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.951291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.951467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.951499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.951692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.951724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 
00:27:18.975 [2024-11-20 16:28:49.951907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.951939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.952059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.952092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.952274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.952306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.952437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.952469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.952595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.952626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.952817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.952850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.953114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.953146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.953342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.953375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.953602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.953643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.953772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.953804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 
00:27:18.975 [2024-11-20 16:28:49.953983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.954015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.954233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.954266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.954463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.954495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.954616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.954649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.954826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.954858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.954999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.955030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.955144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.955176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.955323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.955355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.955535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.955567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 00:27:18.975 [2024-11-20 16:28:49.955704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.955735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.975 qpair failed and we were unable to recover it. 
00:27:18.975 [2024-11-20 16:28:49.955978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.975 [2024-11-20 16:28:49.956010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.956217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.956250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.956445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.956478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.956656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.956688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.956806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.956839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.956954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.956986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.957214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.957248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.957388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.957421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.957670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.957702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.957890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.957922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 
00:27:18.976 [2024-11-20 16:28:49.958046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.958077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.958249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.958281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.958476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.958507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.958688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.958719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.958962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.958994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.959200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.959254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.959480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.959513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.959637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.959670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.959777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.959809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.959913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.959945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 
00:27:18.976 [2024-11-20 16:28:49.960066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.960098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.960324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.960358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.960560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.960592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.960768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.960800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.960947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.960979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.961173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.961214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.961342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.961375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.961546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.961578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.961750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.961782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.961969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.962002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 
00:27:18.976 [2024-11-20 16:28:49.962112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.962143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.962347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.962380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.976 qpair failed and we were unable to recover it. 00:27:18.976 [2024-11-20 16:28:49.962507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.976 [2024-11-20 16:28:49.962540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.962719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.962750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.962861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.962893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.963142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.963174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.963299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.963335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.963455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.963486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.963605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.963636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.963737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.963769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 
00:27:18.977 [2024-11-20 16:28:49.963957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.963988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.964155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.964187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.964495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.964530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.964664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.964697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.964885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.964917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.965049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.965082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.965266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.965299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.965427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.965459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.965582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.965614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.965751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.965783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 
00:27:18.977 [2024-11-20 16:28:49.965968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.965999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.966124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.966156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.966406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.966439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.966625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.966657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.966831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.966863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.967068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.967099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.967219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.967252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.967424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.967456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.967628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.967660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.967773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.967805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 
00:27:18.977 [2024-11-20 16:28:49.967934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.967966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.968071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.968104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.977 qpair failed and we were unable to recover it. 00:27:18.977 [2024-11-20 16:28:49.968226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.977 [2024-11-20 16:28:49.968259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.968375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.968408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.968587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.968619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.968871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.968903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.969113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.969145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.969269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.969302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.969415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.969447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.969559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.969597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 
00:27:18.978 [2024-11-20 16:28:49.969723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.969756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.969937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.969969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.970085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.970117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.970358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.970391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.970620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.970652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.970826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.970858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.970973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.971005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.971178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.971217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.971352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.971384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.971499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.971530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 
00:27:18.978 [2024-11-20 16:28:49.971654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.971686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.971807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.971839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.972049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.972081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.972269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.972304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.972478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.972511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.972616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.972648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.972780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.972812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.972925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.972957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.973136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.973168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 00:27:18.978 [2024-11-20 16:28:49.973314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.978 [2024-11-20 16:28:49.973347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.978 qpair failed and we were unable to recover it. 
00:27:18.978 [2024-11-20 16:28:49.973462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.978 [2024-11-20 16:28:49.973494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420
00:27:18.978 qpair failed and we were unable to recover it.
00:27:18.978 [... the same connect() failure (errno = 111, ECONNREFUSED) and unrecoverable qpair error for tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 repeat for each subsequent reconnect attempt ...]
00:27:18.979 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:18.980 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:18.980 [2024-11-20 16:28:49.981390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.980 [2024-11-20 16:28:49.981428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420
00:27:18.980 qpair failed and we were unable to recover it.
00:27:18.980 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:18.980 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:18.980 16:28:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:18.980 [... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 repeat for each subsequent reconnect attempt ...]
00:27:18.982 [2024-11-20 16:28:49.994303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.982 [2024-11-20 16:28:49.994350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:18.982 qpair failed and we were unable to recover it.
00:27:18.982 [... the same failure repeats several more times for tqpair=0x7feca4000b90 ...]
00:27:18.982 [2024-11-20 16:28:49.995298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.982 [2024-11-20 16:28:49.995335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.982 qpair failed and we were unable to recover it.
00:27:18.983 [... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 repeat for each subsequent reconnect attempt ...]
00:27:18.985 [2024-11-20 16:28:50.010616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.985 [2024-11-20 16:28:50.010648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.985 qpair failed and we were unable to recover it. 00:27:18.985 [2024-11-20 16:28:50.010783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.985 [2024-11-20 16:28:50.010816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.985 qpair failed and we were unable to recover it. 00:27:18.985 [2024-11-20 16:28:50.010944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.985 [2024-11-20 16:28:50.010975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.985 qpair failed and we were unable to recover it. 00:27:18.985 [2024-11-20 16:28:50.011115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.985 [2024-11-20 16:28:50.011147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.985 qpair failed and we were unable to recover it. 00:27:18.985 [2024-11-20 16:28:50.011288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.985 [2024-11-20 16:28:50.011321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.985 qpair failed and we were unable to recover it. 00:27:18.985 [2024-11-20 16:28:50.011446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.985 [2024-11-20 16:28:50.011478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.985 qpair failed and we were unable to recover it. 00:27:18.985 [2024-11-20 16:28:50.011584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.985 [2024-11-20 16:28:50.011616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.985 qpair failed and we were unable to recover it. 00:27:18.985 [2024-11-20 16:28:50.011739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.011771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.011895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.011927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.012065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.012100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 
00:27:18.986 [2024-11-20 16:28:50.012226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.012265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.012400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.012435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.012553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.012585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.012703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.012734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.012908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.012940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.013124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.013158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.013291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.013324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.013451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.013483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.013611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.013646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.013759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.013791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 
00:27:18.986 [2024-11-20 16:28:50.013960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.013992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.014116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.014148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.014281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.014315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.014489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.014522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.014632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.014666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.014781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.014807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.015037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.015064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.015161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.015188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.015308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.015335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 00:27:18.986 [2024-11-20 16:28:50.015431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.986 [2024-11-20 16:28:50.015458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.986 qpair failed and we were unable to recover it. 
00:27:18.986 [2024-11-20 16:28:50.015624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.986 [2024-11-20 16:28:50.015651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.986 qpair failed and we were unable to recover it.
00:27:18.986 [2024-11-20 16:28:50.015760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.986 [2024-11-20 16:28:50.015787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.986 qpair failed and we were unable to recover it.
00:27:18.986 [2024-11-20 16:28:50.015895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.986 [2024-11-20 16:28:50.015922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.986 qpair failed and we were unable to recover it.
00:27:18.986 [2024-11-20 16:28:50.016036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.986 [2024-11-20 16:28:50.016063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 [2024-11-20 16:28:50.016157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.016183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 [2024-11-20 16:28:50.016302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.016330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 [2024-11-20 16:28:50.016427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.016454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 [2024-11-20 16:28:50.016659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.016723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:18.987 [2024-11-20 16:28:50.016867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.016902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
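The trace line above shows the test registering its cleanup handler before continuing. A self-contained sketch of that pattern follows; process_shm and nvmftestfini are harness helpers named in the log and are stubbed here only so the sketch runs on its own:

  # Run cleanup on Ctrl-C, termination, or normal exit, mirroring the trap in the log.
  # The two functions below are placeholders for the real harness helpers.
  process_shm() { echo "would inspect shared memory for id ${2:-unknown}"; }
  nvmftestfini() { echo "would tear down the NVMe-oF test environment"; }
  trap 'process_shm --id "$NVMF_APP_SHM_ID" || :; nvmftestfini' SIGINT SIGTERM EXIT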
00:27:18.987 [2024-11-20 16:28:50.017022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.017055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:18.987 [2024-11-20 16:28:50.017240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.017276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 [2024-11-20 16:28:50.017401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.017433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:18.987 [2024-11-20 16:28:50.017558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.017593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 [2024-11-20 16:28:50.017712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.017745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:18.987 [2024-11-20 16:28:50.017873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.017907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 [2024-11-20 16:28:50.018022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.018054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
00:27:18.987 [2024-11-20 16:28:50.018232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.987 [2024-11-20 16:28:50.018266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420
00:27:18.987 qpair failed and we were unable to recover it.
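The rpc_cmd trace above creates the backing device for this test case: a 64 MiB RAM-backed malloc bdev with 512-byte blocks, named Malloc0. In SPDK's autotest scripts rpc_cmd generally wraps scripts/rpc.py against the running target application, so a manual equivalent would look roughly like the line below (the -s socket path is an assumption; it should match whatever socket the target was started with):

  # Hypothetical manual equivalent of: rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  # 64 = size in MiB, 512 = block size in bytes, -b names the bdev "Malloc0".
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0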
00:27:18.987 [2024-11-20 16:28:50.018392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.018423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.018536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.018578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.018693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.018726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.018901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.018933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.019112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.019144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.019284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.019318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.019497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.019529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.019657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.019689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.019810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.019843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.019953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.019985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 
00:27:18.987 [2024-11-20 16:28:50.020178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.020221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.020331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.020363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.020467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.020498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.020617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.020649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.020754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.020785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.020925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.020957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.021073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.021105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.987 [2024-11-20 16:28:50.021217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.987 [2024-11-20 16:28:50.021249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.987 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.021422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.021450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.021549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.021576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 
00:27:18.988 [2024-11-20 16:28:50.021677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.021703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.021828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.021855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.021954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.021980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.022155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.022182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.022313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.022341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.022438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.022465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.022574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.022601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.022697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.022725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.022861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.022921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.023054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.023090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 
00:27:18.988 [2024-11-20 16:28:50.023210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.023244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.023359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.023391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.023499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.023531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.023713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.023745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.023858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.023890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.024004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.024036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.024166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.024198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.024343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.024377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.024491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.024523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.024700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.024732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 
00:27:18.988 [2024-11-20 16:28:50.024862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.024894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.025007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.025045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.025227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.025262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.025377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.025409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.025515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.025548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.025741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.025772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.025976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.026009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.026116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.026146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.026341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.026374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 00:27:18.988 [2024-11-20 16:28:50.026485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.988 [2024-11-20 16:28:50.026517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.988 qpair failed and we were unable to recover it. 
00:27:18.989 [2024-11-20 16:28:50.026705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.026737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.026850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.026882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.027055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.027087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.027220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.027253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.027380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.027412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.027529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.027560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.027730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.027762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.027881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.027912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.028085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.028116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.028307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.028341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 
00:27:18.989 [2024-11-20 16:28:50.028460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.028491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.028661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.028692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.028802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.028835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.028947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.028979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.029084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.029116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.029294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.029326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.029436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.029469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.029577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.029608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.029734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.029778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.029892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.029935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 
00:27:18.989 [2024-11-20 16:28:50.030047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.030080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.030256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.030291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.030393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.030425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.030600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.030633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.030769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.030801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.030916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.030949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.031073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.031107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.031242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.031278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.031414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.989 [2024-11-20 16:28:50.031447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.989 qpair failed and we were unable to recover it. 00:27:18.989 [2024-11-20 16:28:50.031561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.031595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 
00:27:18.990 [2024-11-20 16:28:50.031710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.031742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.031866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.031898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.032020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.032053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.032167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.032200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.032336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.032370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.032525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.032558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.032768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.032801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.032956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.032990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.033116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.033149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.033291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.033325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 
00:27:18.990 [2024-11-20 16:28:50.033461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.033492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.033613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.033647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.033769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.033802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.033926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.033959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.034075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.034108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.034240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.034280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.034384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.034417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.034535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.034567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.034682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.034714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.034843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.034876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 
00:27:18.990 [2024-11-20 16:28:50.034994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.035026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.035144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.035177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.035318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.035353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.035466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.035498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.035619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.035652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.035779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.035811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.035933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.035965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.036070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.036103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.036235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.036269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 00:27:18.990 [2024-11-20 16:28:50.036390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.990 [2024-11-20 16:28:50.036423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.990 qpair failed and we were unable to recover it. 
00:27:18.990 [2024-11-20 16:28:50.036537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.036570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.036812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.036844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.037038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.037070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.037238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.037271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.037392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.037424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.037535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.037567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.037680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.037713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.037828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.037860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.037971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.038004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.038273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.038307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 
00:27:18.991 [2024-11-20 16:28:50.038507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.038540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.038650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.038683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.038792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.038830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.039066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.039099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.039223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.039257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.039446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.039478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.039607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.039639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.039812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.039846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.039966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.039998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.040110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.040142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 
00:27:18.991 [2024-11-20 16:28:50.040274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.040309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.040422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.040454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.040557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.040590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.040705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.040738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.040970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.041003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.041186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.991 [2024-11-20 16:28:50.041231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.991 qpair failed and we were unable to recover it. 00:27:18.991 [2024-11-20 16:28:50.041358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.041390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.041574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.041608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.041742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.041776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.041893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.041925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 
00:27:18.992 [2024-11-20 16:28:50.042132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.042165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.042311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.042345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.042528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.042561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.042679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.042712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.042895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.042929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.043115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.043148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.043270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.043303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.043547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.043580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.043752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.043785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.043890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.043923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 
00:27:18.992 [2024-11-20 16:28:50.044051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.044084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.044282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.044316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.044532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.044565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.044688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.044721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.044975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.045008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.045117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.045150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.045341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.045375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.045557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.045589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.045706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.045740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.045978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.046011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 
00:27:18.992 [2024-11-20 16:28:50.046129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.046162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.046290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.046324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.046453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.046487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.046652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.046710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec98000b90 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.046868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.046925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.047058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.047092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.047273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.047307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.992 [2024-11-20 16:28:50.047427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.992 [2024-11-20 16:28:50.047460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.992 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.047586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.047617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.047740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.047773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 
00:27:18.993 [2024-11-20 16:28:50.047914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.047947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.048120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.048152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.048353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.048386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.048513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.048547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.048727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.048758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.048872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.048905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.049146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.049188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.049343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.049375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.049562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.049594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.049783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.049815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 
00:27:18.993 [2024-11-20 16:28:50.049993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.050024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.050148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.050180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.050298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.050330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.050440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.050472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.050580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.050612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.050725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.050757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.050883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.050916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.051193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.051238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.051357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.051396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 Malloc0 00:27:18.993 [2024-11-20 16:28:50.051584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.051616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 
00:27:18.993 [2024-11-20 16:28:50.051836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.051868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.051976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.052009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.052133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.052165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.052286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.993 [2024-11-20 16:28:50.052320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.052440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.052472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.993 [2024-11-20 16:28:50.052577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.993 [2024-11-20 16:28:50.052609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.993 qpair failed and we were unable to recover it. 00:27:18.994 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:18.994 [2024-11-20 16:28:50.052735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.052768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.052968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.053000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.994 qpair failed and we were unable to recover it. 
00:27:18.994 [2024-11-20 16:28:50.053129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.053162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.994 [2024-11-20 16:28:50.053354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.053387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.053516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.053548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.053664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.053702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.053830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.053862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.053971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.054002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.054130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.054162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.054290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.054321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.054438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.054470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 
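Interleaved with the connection errors, the xtrace lines above show the test script issuing rpc_cmd nvmf_create_transport -t tcp -o against the freshly started target, with rpc_cmd acting as the harness's wrapper around SPDK's scripts/rpc.py. Outside the harness, an equivalent call would look roughly like the sketch below; the RPC socket path is SPDK's default and is an assumption here, not something taken from this log.

    # Create the TCP transport on a running nvmf target via SPDK's JSON-RPC client.
    # /var/tmp/spdk.sock is SPDK's default RPC socket; adjust it if the target was
    # started with a different -r/--rpc-socket path.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t TCP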
00:27:18.994 [2024-11-20 16:28:50.054582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.054615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.054792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.054824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.055001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.055033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.055216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.055250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.055364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.055396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.055526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.055559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.055664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.055696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.055818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.055850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.055963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.055994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.056103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.056134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 
00:27:18.994 [2024-11-20 16:28:50.056264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.056299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.056421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.056452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.056642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.056673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.056868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.056900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.057072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.057104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.057232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.057266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.057379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.057411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.057586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.057617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.057733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.057765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-11-20 16:28:50.057948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.057980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 
00:27:18.994 [2024-11-20 16:28:50.058108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-11-20 16:28:50.058140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.058269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.058302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.058494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.058526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.058745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.058776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.058884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.058917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.059026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.059057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 [2024-11-20 16:28:50.059064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.059251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.059283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.059385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.059418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.059552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.059584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 
00:27:18.995 [2024-11-20 16:28:50.059696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.059728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.059903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.059935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.060119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.060151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.060355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.060389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.060501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.060534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.060689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.060725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.060832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.060864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.060965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.060998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.061121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.061154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.061279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.061313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 
00:27:18.995 [2024-11-20 16:28:50.061484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.061517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.061631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.061663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.061838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.061870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.061997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.062029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.062217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.062252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.062497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.062530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.062650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.062683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.062810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.062843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.062974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.063007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.063256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.063290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 
00:27:18.995 [2024-11-20 16:28:50.063479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.063513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.063689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.063721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.063841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.063874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.064055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-11-20 16:28:50.064088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-11-20 16:28:50.064215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.064249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.064361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.064394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.064917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.064956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.065151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.065188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.065393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.065427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.065535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.065565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 
00:27:18.996 [2024-11-20 16:28:50.065855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.065888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.065999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.066032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.066241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.066279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.066387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.066420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.066538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.066571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.066742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.066774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.066885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.066917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.067114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.067147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.067272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.067306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.067503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.067536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 
00:27:18.996 [2024-11-20 16:28:50.067726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.996 [2024-11-20 16:28:50.067760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.067947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.067978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.068099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:18.996 [2024-11-20 16:28:50.068131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.068258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.068291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.068408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.068447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.068708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.068742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.068868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.068900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.069033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.069067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 
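The shell trace interleaved with the connection spam above shows the test reaching host/target_disconnect.sh@22 and issuing rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001. rpc_cmd is assumed here to be the harness wrapper around SPDK's scripts/rpc.py, so the equivalent stand-alone target-side step would look roughly like the sketch below (it also assumes the TCP transport still needs to be created, which the harness normally does earlier in the run):

    # Create the NVMe/TCP transport and the subsystem the host will attach to.
    # Flag values are copied from the trace above: -a allows any host, -s sets the serial number.
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001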
00:27:18.996 [2024-11-20 16:28:50.069192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.069235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.069365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.069397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.069595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.069627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.069733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.069764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.069948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.069979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.070158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.070192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.070326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-11-20 16:28:50.070359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-11-20 16:28:50.070550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.070581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.070752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.070784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.071033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.071065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 
00:27:18.997 [2024-11-20 16:28:50.071342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.071375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.071561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.071593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.071703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.071735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.071917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.071949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.072066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.072098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.072217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.072250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.072452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.072483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.072681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.072712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.072837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.072869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.072988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.073020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 
00:27:18.997 [2024-11-20 16:28:50.073262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.073295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.073427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.073458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.073635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.073674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.073853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.073883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.073993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.074027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.074229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.074262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.074365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.074396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.074515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.074546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.074722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.074754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.074874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.074906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 
00:27:18.997 [2024-11-20 16:28:50.075097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.075129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.075308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.075342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-11-20 16:28:50.075516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-11-20 16:28:50.075548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.075667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.075699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.998 [2024-11-20 16:28:50.075880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.075913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.076034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.076071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:18.998 [2024-11-20 16:28:50.076256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.076288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.076411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.076443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 
00:27:18.998 [2024-11-20 16:28:50.076655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.998 [2024-11-20 16:28:50.076686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.076812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.076844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.076954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.076985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.998 [2024-11-20 16:28:50.077091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.077123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.077303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.077336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.077467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.077498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.077603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.077634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.077764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.077798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.077996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.078033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 
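The xtrace fragments in the previous two blocks show the next configuration step, host/target_disconnect.sh@24 running rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0, which exposes a malloc bdev as a namespace of the subsystem created above. A minimal stand-alone sketch of that step follows; the Malloc0 size and block size are illustrative assumptions, since the log does not show how the bdev was created:

    # Create a RAM-backed bdev named Malloc0 (64 MiB, 512-byte blocks; sizes are assumptions)
    # and attach it as a namespace of cnode1.
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0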
00:27:18.998 [2024-11-20 16:28:50.078145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.078180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.078318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.078352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.078508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.078542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.078661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.078693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.078825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.078857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.078977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.079010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.079177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.079253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.079532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.079612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.079817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.079860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.079997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.080035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 
00:27:18.998 [2024-11-20 16:28:50.080155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.080189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.080351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.080387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.080523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.080558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.080683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.080718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.080840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.080874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.080993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.081026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.081137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.998 [2024-11-20 16:28:50.081170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481ba0 with addr=10.0.0.2, port=4420 00:27:18.998 qpair failed and we were unable to recover it. 00:27:18.998 [2024-11-20 16:28:50.081400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.081470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.081684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.081727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.081865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.081897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 
00:27:18.999 [2024-11-20 16:28:50.082011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.082044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.082153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.082186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.082318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.082351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.082469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.082502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.082610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.082643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.082768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.082801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.082921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.082953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feca4000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.083089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.083131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.083267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.083303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.083436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.083469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 
00:27:18.999 [2024-11-20 16:28:50.083711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.083744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.999 [2024-11-20 16:28:50.083951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.083984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.084170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.999 [2024-11-20 16:28:50.084214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.084340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.084373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.084495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.999 [2024-11-20 16:28:50.084527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.084700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.084733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.999 [2024-11-20 16:28:50.084858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.084891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.085007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.085039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 
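Here the trace reaches host/target_disconnect.sh@25, rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420, the step that finally gives the refused connect() attempts something to talk to (the "Target Listening" notice appears a few blocks further down). The same call as a plain rpc.py invocation, with flags copied from the trace and the address family left at its IPv4 default:

    # Listen for NVMe/TCP connections on 10.0.0.2:4420 for subsystem cnode1.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420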
00:27:18.999 [2024-11-20 16:28:50.085171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.085215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.085410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.085444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.085627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.085660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.085918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.085951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.086082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.086120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.086248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.086282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.086407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.086441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.086689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.086723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.086914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.999 [2024-11-20 16:28:50.086947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:18.999 qpair failed and we were unable to recover it. 00:27:18.999 [2024-11-20 16:28:50.087120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.000 [2024-11-20 16:28:50.087153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec9c000b90 with addr=10.0.0.2, port=4420 00:27:19.000 qpair failed and we were unable to recover it. 
00:27:19.000 [2024-11-20 16:28:50.087289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.000 [2024-11-20 16:28:50.089764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.000 [2024-11-20 16:28:50.089890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.000 [2024-11-20 16:28:50.089934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.000 [2024-11-20 16:28:50.089957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.000 [2024-11-20 16:28:50.089976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.000 [2024-11-20 16:28:50.090029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.000 qpair failed and we were unable to recover it. 00:27:19.000 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.000 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:19.000 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.000 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.000 [2024-11-20 16:28:50.099631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.000 [2024-11-20 16:28:50.099721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.000 [2024-11-20 16:28:50.099753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.000 [2024-11-20 16:28:50.099771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.000 [2024-11-20 16:28:50.099788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.000 [2024-11-20 16:28:50.099823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.000 qpair failed and we were unable to recover it. 
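Once the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice appears, the errno 111 spam stops and the failure mode changes: the TCP socket now connects, but the target rejects the I/O queue pair with "Unknown controller ID 0x1" and the host sees the Fabrics CONNECT command complete with sct 1, sc 130. Status 130 is 0x82, which in NVMe over Fabrics is the CONNECT Invalid Parameters code; reading this as controller ID 1 having gone stale across the disconnect under test is an interpretation, not something the log states. The repetition that follows is easiest to inspect in summarized form, for example (console.log is a placeholder name for a saved copy of this output):

    # Group the host-side CONNECT completions by status to confirm they are all the same failure.
    grep -o 'Connect command completed with error: sct [0-9]*, sc [0-9]*' console.log | sort | uniq -c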
00:27:19.000 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.000 16:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2080507 00:27:19.000 [2024-11-20 16:28:50.109606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.000 [2024-11-20 16:28:50.109673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.000 [2024-11-20 16:28:50.109695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.000 [2024-11-20 16:28:50.109706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.000 [2024-11-20 16:28:50.109715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.000 [2024-11-20 16:28:50.109738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.000 qpair failed and we were unable to recover it. 00:27:19.000 [2024-11-20 16:28:50.119668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.000 [2024-11-20 16:28:50.119731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.000 [2024-11-20 16:28:50.119746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.000 [2024-11-20 16:28:50.119755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.000 [2024-11-20 16:28:50.119761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.000 [2024-11-20 16:28:50.119778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.000 qpair failed and we were unable to recover it. 00:27:19.000 [2024-11-20 16:28:50.129582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.000 [2024-11-20 16:28:50.129644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.000 [2024-11-20 16:28:50.129662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.000 [2024-11-20 16:28:50.129669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.000 [2024-11-20 16:28:50.129678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.000 [2024-11-20 16:28:50.129694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.000 qpair failed and we were unable to recover it. 
00:27:19.000 [2024-11-20 16:28:50.139625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.000 [2024-11-20 16:28:50.139675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.000 [2024-11-20 16:28:50.139688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.000 [2024-11-20 16:28:50.139694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.000 [2024-11-20 16:28:50.139700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.000 [2024-11-20 16:28:50.139714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.000 qpair failed and we were unable to recover it. 00:27:19.000 [2024-11-20 16:28:50.149715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.000 [2024-11-20 16:28:50.149770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.000 [2024-11-20 16:28:50.149784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.000 [2024-11-20 16:28:50.149791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.000 [2024-11-20 16:28:50.149796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.000 [2024-11-20 16:28:50.149811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.000 qpair failed and we were unable to recover it. 00:27:19.000 [2024-11-20 16:28:50.159703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.000 [2024-11-20 16:28:50.159758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.000 [2024-11-20 16:28:50.159771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.000 [2024-11-20 16:28:50.159777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.000 [2024-11-20 16:28:50.159783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.000 [2024-11-20 16:28:50.159798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.000 qpair failed and we were unable to recover it. 
00:27:19.000 [2024-11-20 16:28:50.169780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.258 [2024-11-20 16:28:50.169851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.258 [2024-11-20 16:28:50.169865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.258 [2024-11-20 16:28:50.169872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.258 [2024-11-20 16:28:50.169878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.258 [2024-11-20 16:28:50.169892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.258 qpair failed and we were unable to recover it. 00:27:19.258 [2024-11-20 16:28:50.179771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.258 [2024-11-20 16:28:50.179824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.258 [2024-11-20 16:28:50.179837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.258 [2024-11-20 16:28:50.179843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.258 [2024-11-20 16:28:50.179849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.258 [2024-11-20 16:28:50.179864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.258 qpair failed and we were unable to recover it. 00:27:19.258 [2024-11-20 16:28:50.189795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.258 [2024-11-20 16:28:50.189855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.189869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.189875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.189881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.189896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 
00:27:19.259 [2024-11-20 16:28:50.199832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.199887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.199902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.199908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.199914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.199929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 00:27:19.259 [2024-11-20 16:28:50.209856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.209910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.209924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.209930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.209936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.209951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 00:27:19.259 [2024-11-20 16:28:50.219866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.219918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.219932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.219939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.219945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.219959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 
00:27:19.259 [2024-11-20 16:28:50.229896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.229949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.229962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.229968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.229974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.229989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 00:27:19.259 [2024-11-20 16:28:50.239925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.239981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.239994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.240000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.240006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.240021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 00:27:19.259 [2024-11-20 16:28:50.249965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.250021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.250033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.250040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.250046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.250060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 
00:27:19.259 [2024-11-20 16:28:50.259913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.259975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.259989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.259998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.260005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.260019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 00:27:19.259 [2024-11-20 16:28:50.270006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.270055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.270068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.270075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.270081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.270096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 00:27:19.259 [2024-11-20 16:28:50.280049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.280105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.280119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.280126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.280132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.280148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 
00:27:19.259 [2024-11-20 16:28:50.290005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.290062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.290075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.290082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.290088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.290103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 00:27:19.259 [2024-11-20 16:28:50.300088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.300138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.300151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.300158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.300164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.300182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 00:27:19.259 [2024-11-20 16:28:50.310119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.259 [2024-11-20 16:28:50.310171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.259 [2024-11-20 16:28:50.310185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.259 [2024-11-20 16:28:50.310191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.259 [2024-11-20 16:28:50.310197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.259 [2024-11-20 16:28:50.310215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.259 qpair failed and we were unable to recover it. 
00:27:19.259 [2024-11-20 16:28:50.320154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.320211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.320224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.320231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.320237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.320252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 00:27:19.260 [2024-11-20 16:28:50.330231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.330287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.330301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.330308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.330314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.330329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 00:27:19.260 [2024-11-20 16:28:50.340143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.340197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.340217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.340223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.340229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.340245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 
00:27:19.260 [2024-11-20 16:28:50.350271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.350330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.350344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.350351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.350357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.350371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 00:27:19.260 [2024-11-20 16:28:50.360258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.360317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.360331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.360337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.360344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.360358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 00:27:19.260 [2024-11-20 16:28:50.370247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.370349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.370362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.370369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.370375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.370390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 
00:27:19.260 [2024-11-20 16:28:50.380317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.380373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.380387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.380393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.380400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.380414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 00:27:19.260 [2024-11-20 16:28:50.390303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.390400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.390415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.390425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.390431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.390446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 00:27:19.260 [2024-11-20 16:28:50.400369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.400450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.400464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.400471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.400477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.400491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 
00:27:19.260 [2024-11-20 16:28:50.410358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.410416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.410429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.410435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.410441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.410456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 00:27:19.260 [2024-11-20 16:28:50.420434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.420486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.420499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.420505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.420511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.420526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 00:27:19.260 [2024-11-20 16:28:50.430471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.430522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.430536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.430543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.430549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.430567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 
00:27:19.260 [2024-11-20 16:28:50.440468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.260 [2024-11-20 16:28:50.440551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.260 [2024-11-20 16:28:50.440564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.260 [2024-11-20 16:28:50.440570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.260 [2024-11-20 16:28:50.440576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.260 [2024-11-20 16:28:50.440590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.260 qpair failed and we were unable to recover it. 00:27:19.260 [2024-11-20 16:28:50.450552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.261 [2024-11-20 16:28:50.450621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.261 [2024-11-20 16:28:50.450635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.261 [2024-11-20 16:28:50.450642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.261 [2024-11-20 16:28:50.450648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.261 [2024-11-20 16:28:50.450662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.261 qpair failed and we were unable to recover it. 00:27:19.261 [2024-11-20 16:28:50.460567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.261 [2024-11-20 16:28:50.460618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.261 [2024-11-20 16:28:50.460631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.261 [2024-11-20 16:28:50.460638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.261 [2024-11-20 16:28:50.460644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.261 [2024-11-20 16:28:50.460659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.261 qpair failed and we were unable to recover it. 
00:27:19.261 [2024-11-20 16:28:50.470578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.261 [2024-11-20 16:28:50.470632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.261 [2024-11-20 16:28:50.470646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.261 [2024-11-20 16:28:50.470653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.261 [2024-11-20 16:28:50.470659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.261 [2024-11-20 16:28:50.470674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.261 qpair failed and we were unable to recover it. 00:27:19.261 [2024-11-20 16:28:50.480623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.261 [2024-11-20 16:28:50.480679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.261 [2024-11-20 16:28:50.480693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.261 [2024-11-20 16:28:50.480700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.261 [2024-11-20 16:28:50.480706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.261 [2024-11-20 16:28:50.480721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.261 qpair failed and we were unable to recover it. 00:27:19.521 [2024-11-20 16:28:50.490583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.490637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.490650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.490657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.490663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.521 [2024-11-20 16:28:50.490677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.521 qpair failed and we were unable to recover it. 
00:27:19.521 [2024-11-20 16:28:50.500601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.500657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.500670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.500677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.500683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.521 [2024-11-20 16:28:50.500698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.521 qpair failed and we were unable to recover it. 00:27:19.521 [2024-11-20 16:28:50.510691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.510743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.510757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.510764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.510771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.521 [2024-11-20 16:28:50.510784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.521 qpair failed and we were unable to recover it. 00:27:19.521 [2024-11-20 16:28:50.520722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.520776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.520794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.520800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.520806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.521 [2024-11-20 16:28:50.520820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.521 qpair failed and we were unable to recover it. 
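Two other numbers recur in every block: the CONNECT poll reports rc -5 and the completion path reports CQ transport error -6 (No such device or address). These read as negated errno values; on a typical Linux host -5 is EIO and -6 is ENXIO, and the log itself already prints the ENXIO text. A tiny, purely illustrative check:

    # Illustrative only: decode the negative codes in this log as negated
    # errno values (Linux naming assumed).
    import errno
    import os

    for code in (-5, -6):
        print(code, errno.errorcode[-code], os.strerror(-code))
    # Expected on Linux:
    # -5 EIO Input/output error
    # -6 ENXIO No such device or address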
00:27:19.521 [2024-11-20 16:28:50.530751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.530813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.530828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.530834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.530840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.521 [2024-11-20 16:28:50.530855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.521 qpair failed and we were unable to recover it. 00:27:19.521 [2024-11-20 16:28:50.540772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.540846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.540859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.540866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.540871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.521 [2024-11-20 16:28:50.540886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.521 qpair failed and we were unable to recover it. 00:27:19.521 [2024-11-20 16:28:50.550800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.550850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.550864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.550870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.550876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.521 [2024-11-20 16:28:50.550890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.521 qpair failed and we were unable to recover it. 
00:27:19.521 [2024-11-20 16:28:50.560837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.560892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.560906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.560912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.560921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.521 [2024-11-20 16:28:50.560936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.521 qpair failed and we were unable to recover it. 00:27:19.521 [2024-11-20 16:28:50.570870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.570921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.570936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.570943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.570949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.521 [2024-11-20 16:28:50.570963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.521 qpair failed and we were unable to recover it. 00:27:19.521 [2024-11-20 16:28:50.580884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.580960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.580974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.580981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.580987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.521 [2024-11-20 16:28:50.581002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.521 qpair failed and we were unable to recover it. 
00:27:19.521 [2024-11-20 16:28:50.590912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.590964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.590977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.590984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.590991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.521 [2024-11-20 16:28:50.591005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.521 qpair failed and we were unable to recover it. 00:27:19.521 [2024-11-20 16:28:50.600955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.521 [2024-11-20 16:28:50.601011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.521 [2024-11-20 16:28:50.601025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.521 [2024-11-20 16:28:50.601032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.521 [2024-11-20 16:28:50.601038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.601053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 00:27:19.522 [2024-11-20 16:28:50.610910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.610966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.610980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.610987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.610993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.611007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 
00:27:19.522 [2024-11-20 16:28:50.620972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.621068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.621082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.621088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.621094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.621109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 00:27:19.522 [2024-11-20 16:28:50.630977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.631029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.631043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.631050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.631056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.631072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 00:27:19.522 [2024-11-20 16:28:50.641066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.641128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.641142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.641148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.641155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.641169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 
00:27:19.522 [2024-11-20 16:28:50.651082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.651139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.651155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.651161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.651167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.651182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 00:27:19.522 [2024-11-20 16:28:50.661113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.661167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.661180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.661187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.661192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.661210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 00:27:19.522 [2024-11-20 16:28:50.671181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.671239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.671253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.671259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.671264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.671279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 
00:27:19.522 [2024-11-20 16:28:50.681181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.681244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.681258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.681264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.681270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.681284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 00:27:19.522 [2024-11-20 16:28:50.691130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.691185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.691199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.691209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.691218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.691233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 00:27:19.522 [2024-11-20 16:28:50.701273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.701330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.701345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.701352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.701358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.701372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 
00:27:19.522 [2024-11-20 16:28:50.711261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.711313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.711327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.711333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.711340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.711355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 00:27:19.522 [2024-11-20 16:28:50.721307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.721381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.721395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.721401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.721408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.522 [2024-11-20 16:28:50.721422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.522 qpair failed and we were unable to recover it. 00:27:19.522 [2024-11-20 16:28:50.731336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.522 [2024-11-20 16:28:50.731404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.522 [2024-11-20 16:28:50.731418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.522 [2024-11-20 16:28:50.731424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.522 [2024-11-20 16:28:50.731430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.523 [2024-11-20 16:28:50.731445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.523 qpair failed and we were unable to recover it. 
00:27:19.523 [2024-11-20 16:28:50.741290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.523 [2024-11-20 16:28:50.741340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.523 [2024-11-20 16:28:50.741354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.523 [2024-11-20 16:28:50.741360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.523 [2024-11-20 16:28:50.741366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.523 [2024-11-20 16:28:50.741380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.523 qpair failed and we were unable to recover it. 00:27:19.782 [2024-11-20 16:28:50.751352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.782 [2024-11-20 16:28:50.751409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.782 [2024-11-20 16:28:50.751422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.782 [2024-11-20 16:28:50.751428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.782 [2024-11-20 16:28:50.751433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.782 [2024-11-20 16:28:50.751448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.782 qpair failed and we were unable to recover it. 00:27:19.782 [2024-11-20 16:28:50.761401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.782 [2024-11-20 16:28:50.761457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.782 [2024-11-20 16:28:50.761471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.782 [2024-11-20 16:28:50.761478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.782 [2024-11-20 16:28:50.761484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.782 [2024-11-20 16:28:50.761498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.782 qpair failed and we were unable to recover it. 
00:27:19.782 [2024-11-20 16:28:50.771438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.771492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.771505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.771511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.771517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.771531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 00:27:19.783 [2024-11-20 16:28:50.781466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.781528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.781541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.781548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.781553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.781568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 00:27:19.783 [2024-11-20 16:28:50.791500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.791554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.791567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.791573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.791579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.791593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 
00:27:19.783 [2024-11-20 16:28:50.801545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.801605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.801617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.801623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.801629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.801644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 00:27:19.783 [2024-11-20 16:28:50.811494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.811547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.811559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.811565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.811571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.811586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 00:27:19.783 [2024-11-20 16:28:50.821535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.821589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.821602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.821611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.821616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.821630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 
00:27:19.783 [2024-11-20 16:28:50.831594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.831644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.831657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.831664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.831670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.831684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 00:27:19.783 [2024-11-20 16:28:50.841602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.841656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.841669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.841675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.841681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.841695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 00:27:19.783 [2024-11-20 16:28:50.851636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.851692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.851704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.851711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.851717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.851731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 
00:27:19.783 [2024-11-20 16:28:50.861643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.861699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.861712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.861719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.861725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.861743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 00:27:19.783 [2024-11-20 16:28:50.871724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.871779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.871791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.871797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.871803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.871817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 00:27:19.783 [2024-11-20 16:28:50.881740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.881797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.881809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.881815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.881820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.881835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 
00:27:19.783 [2024-11-20 16:28:50.891912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.783 [2024-11-20 16:28:50.891972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.783 [2024-11-20 16:28:50.891984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.783 [2024-11-20 16:28:50.891991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.783 [2024-11-20 16:28:50.891997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.783 [2024-11-20 16:28:50.892011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.783 qpair failed and we were unable to recover it. 00:27:19.784 [2024-11-20 16:28:50.901833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:50.901886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:50.901900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:50.901906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:50.901912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:50.901926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 00:27:19.784 [2024-11-20 16:28:50.911860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:50.911916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:50.911928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:50.911935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:50.911940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:50.911954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 
00:27:19.784 [2024-11-20 16:28:50.921878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:50.921935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:50.921947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:50.921954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:50.921959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:50.921974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 00:27:19.784 [2024-11-20 16:28:50.931820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:50.931877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:50.931890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:50.931896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:50.931902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:50.931917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 00:27:19.784 [2024-11-20 16:28:50.941909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:50.941963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:50.941976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:50.941982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:50.941988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:50.942002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 
00:27:19.784 [2024-11-20 16:28:50.951929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:50.951983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:50.951999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:50.952005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:50.952011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:50.952025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 00:27:19.784 [2024-11-20 16:28:50.961966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:50.962019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:50.962032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:50.962038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:50.962044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:50.962058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 00:27:19.784 [2024-11-20 16:28:50.971992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:50.972048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:50.972060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:50.972066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:50.972072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:50.972086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 
00:27:19.784 [2024-11-20 16:28:50.982073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:50.982122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:50.982134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:50.982140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:50.982146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:50.982160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 00:27:19.784 [2024-11-20 16:28:50.992048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:50.992101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:50.992114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:50.992120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:50.992126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:50.992144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 00:27:19.784 [2024-11-20 16:28:51.002075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:51.002131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:51.002144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:51.002150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:51.002155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:51.002169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 
00:27:19.784 [2024-11-20 16:28:51.012104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.784 [2024-11-20 16:28:51.012160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.784 [2024-11-20 16:28:51.012173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.784 [2024-11-20 16:28:51.012179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.784 [2024-11-20 16:28:51.012185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:19.784 [2024-11-20 16:28:51.012200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.784 qpair failed and we were unable to recover it. 00:27:20.044 [2024-11-20 16:28:51.022132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.044 [2024-11-20 16:28:51.022184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.044 [2024-11-20 16:28:51.022196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.044 [2024-11-20 16:28:51.022214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.044 [2024-11-20 16:28:51.022221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.044 [2024-11-20 16:28:51.022235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.044 qpair failed and we were unable to recover it. 00:27:20.044 [2024-11-20 16:28:51.032129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.044 [2024-11-20 16:28:51.032182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.044 [2024-11-20 16:28:51.032194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.044 [2024-11-20 16:28:51.032204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.044 [2024-11-20 16:28:51.032210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.044 [2024-11-20 16:28:51.032224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.044 qpair failed and we were unable to recover it. 
00:27:20.044 [2024-11-20 16:28:51.042190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.044 [2024-11-20 16:28:51.042254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.044 [2024-11-20 16:28:51.042267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.044 [2024-11-20 16:28:51.042274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.044 [2024-11-20 16:28:51.042280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.044 [2024-11-20 16:28:51.042294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.044 qpair failed and we were unable to recover it. 00:27:20.044 [2024-11-20 16:28:51.052238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.044 [2024-11-20 16:28:51.052307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.044 [2024-11-20 16:28:51.052320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.044 [2024-11-20 16:28:51.052326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.044 [2024-11-20 16:28:51.052332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.044 [2024-11-20 16:28:51.052347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.044 qpair failed and we were unable to recover it. 00:27:20.044 [2024-11-20 16:28:51.062250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.044 [2024-11-20 16:28:51.062303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.044 [2024-11-20 16:28:51.062316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.044 [2024-11-20 16:28:51.062322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.044 [2024-11-20 16:28:51.062328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.062342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 
00:27:20.045 [2024-11-20 16:28:51.072274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.072334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.072347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.072353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.072359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.072373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 00:27:20.045 [2024-11-20 16:28:51.082292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.082346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.082362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.082368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.082374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.082388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 00:27:20.045 [2024-11-20 16:28:51.092260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.092315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.092328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.092334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.092340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.092354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 
00:27:20.045 [2024-11-20 16:28:51.102373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.102447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.102480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.102487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.102493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.102518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 00:27:20.045 [2024-11-20 16:28:51.112411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.112465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.112479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.112485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.112491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.112506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 00:27:20.045 [2024-11-20 16:28:51.122428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.122508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.122521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.122527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.122535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.122550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 
00:27:20.045 [2024-11-20 16:28:51.132402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.132455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.132468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.132475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.132480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.132496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 00:27:20.045 [2024-11-20 16:28:51.142409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.142460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.142473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.142479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.142485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.142500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 00:27:20.045 [2024-11-20 16:28:51.152493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.152544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.152558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.152564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.152570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.152584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 
00:27:20.045 [2024-11-20 16:28:51.162469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.162545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.162557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.162564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.162569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.162583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 00:27:20.045 [2024-11-20 16:28:51.172563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.172619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.172632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.172638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.172643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.172658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.045 qpair failed and we were unable to recover it. 00:27:20.045 [2024-11-20 16:28:51.182561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.045 [2024-11-20 16:28:51.182662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.045 [2024-11-20 16:28:51.182675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.045 [2024-11-20 16:28:51.182680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.045 [2024-11-20 16:28:51.182687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.045 [2024-11-20 16:28:51.182701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.046 qpair failed and we were unable to recover it. 
00:27:20.046 [2024-11-20 16:28:51.192604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.046 [2024-11-20 16:28:51.192662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.046 [2024-11-20 16:28:51.192675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.046 [2024-11-20 16:28:51.192681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.046 [2024-11-20 16:28:51.192687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.046 [2024-11-20 16:28:51.192702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.046 qpair failed and we were unable to recover it. 00:27:20.046 [2024-11-20 16:28:51.202654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.046 [2024-11-20 16:28:51.202711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.046 [2024-11-20 16:28:51.202724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.046 [2024-11-20 16:28:51.202730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.046 [2024-11-20 16:28:51.202736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.046 [2024-11-20 16:28:51.202751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.046 qpair failed and we were unable to recover it. 00:27:20.046 [2024-11-20 16:28:51.212712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.046 [2024-11-20 16:28:51.212793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.046 [2024-11-20 16:28:51.212810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.046 [2024-11-20 16:28:51.212816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.046 [2024-11-20 16:28:51.212822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.046 [2024-11-20 16:28:51.212836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.046 qpair failed and we were unable to recover it. 
00:27:20.046 [2024-11-20 16:28:51.222692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.046 [2024-11-20 16:28:51.222743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.046 [2024-11-20 16:28:51.222756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.046 [2024-11-20 16:28:51.222763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.046 [2024-11-20 16:28:51.222768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.046 [2024-11-20 16:28:51.222783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.046 qpair failed and we were unable to recover it. 00:27:20.046 [2024-11-20 16:28:51.232737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.046 [2024-11-20 16:28:51.232788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.046 [2024-11-20 16:28:51.232801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.046 [2024-11-20 16:28:51.232807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.046 [2024-11-20 16:28:51.232813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.046 [2024-11-20 16:28:51.232828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.046 qpair failed and we were unable to recover it. 00:27:20.046 [2024-11-20 16:28:51.242773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.046 [2024-11-20 16:28:51.242830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.046 [2024-11-20 16:28:51.242843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.046 [2024-11-20 16:28:51.242850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.046 [2024-11-20 16:28:51.242856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.046 [2024-11-20 16:28:51.242871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.046 qpair failed and we were unable to recover it. 
00:27:20.046 [2024-11-20 16:28:51.252831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.046 [2024-11-20 16:28:51.252898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.046 [2024-11-20 16:28:51.252910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.046 [2024-11-20 16:28:51.252920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.046 [2024-11-20 16:28:51.252926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.046 [2024-11-20 16:28:51.252940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.046 qpair failed and we were unable to recover it. 00:27:20.046 [2024-11-20 16:28:51.262848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.046 [2024-11-20 16:28:51.262903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.046 [2024-11-20 16:28:51.262916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.046 [2024-11-20 16:28:51.262922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.046 [2024-11-20 16:28:51.262928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.046 [2024-11-20 16:28:51.262942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.046 qpair failed and we were unable to recover it. 00:27:20.046 [2024-11-20 16:28:51.272805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.046 [2024-11-20 16:28:51.272886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.046 [2024-11-20 16:28:51.272899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.046 [2024-11-20 16:28:51.272905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.046 [2024-11-20 16:28:51.272911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.046 [2024-11-20 16:28:51.272925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.046 qpair failed and we were unable to recover it. 
00:27:20.306 [2024-11-20 16:28:51.282842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.306 [2024-11-20 16:28:51.282898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.306 [2024-11-20 16:28:51.282910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.306 [2024-11-20 16:28:51.282916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.306 [2024-11-20 16:28:51.282922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.306 [2024-11-20 16:28:51.282936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.306 qpair failed and we were unable to recover it. 00:27:20.306 [2024-11-20 16:28:51.292965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.306 [2024-11-20 16:28:51.293020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.306 [2024-11-20 16:28:51.293032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.306 [2024-11-20 16:28:51.293038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.306 [2024-11-20 16:28:51.293044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.306 [2024-11-20 16:28:51.293059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.306 qpair failed and we were unable to recover it. 00:27:20.306 [2024-11-20 16:28:51.302961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.306 [2024-11-20 16:28:51.303015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.306 [2024-11-20 16:28:51.303028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.306 [2024-11-20 16:28:51.303034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.306 [2024-11-20 16:28:51.303039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.306 [2024-11-20 16:28:51.303054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.306 qpair failed and we were unable to recover it. 
00:27:20.306 [2024-11-20 16:28:51.312976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.306 [2024-11-20 16:28:51.313029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.306 [2024-11-20 16:28:51.313043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.306 [2024-11-20 16:28:51.313050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.306 [2024-11-20 16:28:51.313055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.306 [2024-11-20 16:28:51.313070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.306 qpair failed and we were unable to recover it. 00:27:20.306 [2024-11-20 16:28:51.322990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.306 [2024-11-20 16:28:51.323049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.306 [2024-11-20 16:28:51.323063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.306 [2024-11-20 16:28:51.323069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.306 [2024-11-20 16:28:51.323075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.306 [2024-11-20 16:28:51.323089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.306 qpair failed and we were unable to recover it. 00:27:20.306 [2024-11-20 16:28:51.333069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.306 [2024-11-20 16:28:51.333127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.306 [2024-11-20 16:28:51.333139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.306 [2024-11-20 16:28:51.333145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.306 [2024-11-20 16:28:51.333152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.306 [2024-11-20 16:28:51.333166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.306 qpair failed and we were unable to recover it. 
00:27:20.306 [2024-11-20 16:28:51.343077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.306 [2024-11-20 16:28:51.343133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.306 [2024-11-20 16:28:51.343146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.306 [2024-11-20 16:28:51.343152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.306 [2024-11-20 16:28:51.343158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.306 [2024-11-20 16:28:51.343173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.306 qpair failed and we were unable to recover it. 00:27:20.306 [2024-11-20 16:28:51.353124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.306 [2024-11-20 16:28:51.353181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.306 [2024-11-20 16:28:51.353194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.307 [2024-11-20 16:28:51.353200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.307 [2024-11-20 16:28:51.353209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.307 [2024-11-20 16:28:51.353224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.307 qpair failed and we were unable to recover it. 00:27:20.307 [2024-11-20 16:28:51.363161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.307 [2024-11-20 16:28:51.363222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.307 [2024-11-20 16:28:51.363235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.307 [2024-11-20 16:28:51.363241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.307 [2024-11-20 16:28:51.363247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.307 [2024-11-20 16:28:51.363262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.307 qpair failed and we were unable to recover it. 
00:27:20.307 [2024-11-20 16:28:51.373178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.307 [2024-11-20 16:28:51.373240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.307 [2024-11-20 16:28:51.373254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.307 [2024-11-20 16:28:51.373261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.307 [2024-11-20 16:28:51.373269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.307 [2024-11-20 16:28:51.373286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.307 qpair failed and we were unable to recover it. 00:27:20.307 [2024-11-20 16:28:51.383123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.307 [2024-11-20 16:28:51.383180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.307 [2024-11-20 16:28:51.383195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.307 [2024-11-20 16:28:51.383209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.307 [2024-11-20 16:28:51.383215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.307 [2024-11-20 16:28:51.383230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.307 qpair failed and we were unable to recover it. 00:27:20.307 [2024-11-20 16:28:51.393197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.307 [2024-11-20 16:28:51.393295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.307 [2024-11-20 16:28:51.393308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.307 [2024-11-20 16:28:51.393314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.307 [2024-11-20 16:28:51.393320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.307 [2024-11-20 16:28:51.393335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.307 qpair failed and we were unable to recover it. 
00:27:20.307 [2024-11-20 16:28:51.403241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.307 [2024-11-20 16:28:51.403340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.307 [2024-11-20 16:28:51.403352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.307 [2024-11-20 16:28:51.403358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.307 [2024-11-20 16:28:51.403364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.307 [2024-11-20 16:28:51.403378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.307 qpair failed and we were unable to recover it. 00:27:20.307 [2024-11-20 16:28:51.413281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.307 [2024-11-20 16:28:51.413354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.307 [2024-11-20 16:28:51.413367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.307 [2024-11-20 16:28:51.413373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.307 [2024-11-20 16:28:51.413379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.307 [2024-11-20 16:28:51.413394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.307 qpair failed and we were unable to recover it. 00:27:20.307 [2024-11-20 16:28:51.423260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.307 [2024-11-20 16:28:51.423319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.307 [2024-11-20 16:28:51.423332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.307 [2024-11-20 16:28:51.423339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.307 [2024-11-20 16:28:51.423344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.307 [2024-11-20 16:28:51.423362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.307 qpair failed and we were unable to recover it. 
00:27:20.307 [2024-11-20 16:28:51.433325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.307 [2024-11-20 16:28:51.433421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.307 [2024-11-20 16:28:51.433433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.307 [2024-11-20 16:28:51.433440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.307 [2024-11-20 16:28:51.433445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.307 [2024-11-20 16:28:51.433460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.307 qpair failed and we were unable to recover it. 00:27:20.307 [2024-11-20 16:28:51.443347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.307 [2024-11-20 16:28:51.443401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.307 [2024-11-20 16:28:51.443413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.307 [2024-11-20 16:28:51.443420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.307 [2024-11-20 16:28:51.443426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.307 [2024-11-20 16:28:51.443440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.307 qpair failed and we were unable to recover it. 00:27:20.307 [2024-11-20 16:28:51.453408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.307 [2024-11-20 16:28:51.453495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.307 [2024-11-20 16:28:51.453507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.308 [2024-11-20 16:28:51.453514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.308 [2024-11-20 16:28:51.453519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.308 [2024-11-20 16:28:51.453534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.308 qpair failed and we were unable to recover it. 
00:27:20.308 [2024-11-20 16:28:51.463421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.308 [2024-11-20 16:28:51.463476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.308 [2024-11-20 16:28:51.463488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.308 [2024-11-20 16:28:51.463495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.308 [2024-11-20 16:28:51.463501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.308 [2024-11-20 16:28:51.463515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.308 qpair failed and we were unable to recover it. 00:27:20.308 [2024-11-20 16:28:51.473445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.308 [2024-11-20 16:28:51.473501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.308 [2024-11-20 16:28:51.473514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.308 [2024-11-20 16:28:51.473520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.308 [2024-11-20 16:28:51.473526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.308 [2024-11-20 16:28:51.473540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.308 qpair failed and we were unable to recover it. 00:27:20.308 [2024-11-20 16:28:51.483483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.308 [2024-11-20 16:28:51.483537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.308 [2024-11-20 16:28:51.483550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.308 [2024-11-20 16:28:51.483556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.308 [2024-11-20 16:28:51.483562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.308 [2024-11-20 16:28:51.483576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.308 qpair failed and we were unable to recover it. 
00:27:20.308 [2024-11-20 16:28:51.493513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.308 [2024-11-20 16:28:51.493567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.308 [2024-11-20 16:28:51.493580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.308 [2024-11-20 16:28:51.493586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.308 [2024-11-20 16:28:51.493592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.308 [2024-11-20 16:28:51.493606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.308 qpair failed and we were unable to recover it. 00:27:20.308 [2024-11-20 16:28:51.503521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.308 [2024-11-20 16:28:51.503573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.308 [2024-11-20 16:28:51.503585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.308 [2024-11-20 16:28:51.503592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.308 [2024-11-20 16:28:51.503597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.308 [2024-11-20 16:28:51.503612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.308 qpair failed and we were unable to recover it. 00:27:20.308 [2024-11-20 16:28:51.513548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.308 [2024-11-20 16:28:51.513598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.308 [2024-11-20 16:28:51.513614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.308 [2024-11-20 16:28:51.513620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.308 [2024-11-20 16:28:51.513626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.308 [2024-11-20 16:28:51.513640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.308 qpair failed and we were unable to recover it. 
00:27:20.308 [2024-11-20 16:28:51.523516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.308 [2024-11-20 16:28:51.523571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.308 [2024-11-20 16:28:51.523586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.308 [2024-11-20 16:28:51.523592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.308 [2024-11-20 16:28:51.523598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.308 [2024-11-20 16:28:51.523613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.308 qpair failed and we were unable to recover it. 00:27:20.308 [2024-11-20 16:28:51.533581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.308 [2024-11-20 16:28:51.533635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.308 [2024-11-20 16:28:51.533647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.308 [2024-11-20 16:28:51.533654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.308 [2024-11-20 16:28:51.533660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.308 [2024-11-20 16:28:51.533674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.308 qpair failed and we were unable to recover it. 00:27:20.568 [2024-11-20 16:28:51.543675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.568 [2024-11-20 16:28:51.543732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.568 [2024-11-20 16:28:51.543745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.568 [2024-11-20 16:28:51.543752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.543758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.543772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 
00:27:20.569 [2024-11-20 16:28:51.553595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.553647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.553660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.553666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.553672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.553689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 00:27:20.569 [2024-11-20 16:28:51.563678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.563750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.563762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.563768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.563774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.563788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 00:27:20.569 [2024-11-20 16:28:51.573704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.573758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.573771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.573777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.573783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.573798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 
00:27:20.569 [2024-11-20 16:28:51.583734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.583789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.583801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.583808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.583814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.583827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 00:27:20.569 [2024-11-20 16:28:51.593699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.593750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.593763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.593769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.593775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.593789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 00:27:20.569 [2024-11-20 16:28:51.603842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.603919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.603932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.603938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.603944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.603959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 
00:27:20.569 [2024-11-20 16:28:51.613880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.613942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.613954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.613961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.613966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.613980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 00:27:20.569 [2024-11-20 16:28:51.623790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.623842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.623855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.623861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.623867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.623881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 00:27:20.569 [2024-11-20 16:28:51.633915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.633971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.633984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.633991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.633997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.634011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 
00:27:20.569 [2024-11-20 16:28:51.643947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.644056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.644076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.644082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.644088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.644103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 00:27:20.569 [2024-11-20 16:28:51.653944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.654004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.654018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.654025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.654031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.654046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 00:27:20.569 [2024-11-20 16:28:51.663972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.664020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.664034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.569 [2024-11-20 16:28:51.664041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.569 [2024-11-20 16:28:51.664047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.569 [2024-11-20 16:28:51.664062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.569 qpair failed and we were unable to recover it. 
00:27:20.569 [2024-11-20 16:28:51.674019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.569 [2024-11-20 16:28:51.674079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.569 [2024-11-20 16:28:51.674092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.674099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.674104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.674119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:28:51.683963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.684018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.684030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.684036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.684045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.684059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:28:51.694055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.694109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.694122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.694128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.694134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.694148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 
00:27:20.570 [2024-11-20 16:28:51.704087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.704139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.704151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.704158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.704164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.704178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:28:51.714136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.714198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.714218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.714224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.714231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.714245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:28:51.724149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.724229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.724242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.724249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.724255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.724270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 
00:27:20.570 [2024-11-20 16:28:51.734220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.734328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.734342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.734349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.734355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.734371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:28:51.744225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.744279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.744292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.744299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.744305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.744320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:28:51.754292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.754348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.754361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.754368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.754373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.754387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 
00:27:20.570 [2024-11-20 16:28:51.764273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.764327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.764341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.764348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.764354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.764369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:28:51.774228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.774282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.774298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.774304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.774309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.774325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:28:51.784330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.784382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.784394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.784400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.784406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.784420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 
00:27:20.570 [2024-11-20 16:28:51.794355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.570 [2024-11-20 16:28:51.794404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.570 [2024-11-20 16:28:51.794417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.570 [2024-11-20 16:28:51.794423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.570 [2024-11-20 16:28:51.794428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.570 [2024-11-20 16:28:51.794443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.830 [2024-11-20 16:28:51.804400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.804454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.804466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.804472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.804478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.804492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 00:27:20.831 [2024-11-20 16:28:51.814428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.814485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.814497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.814507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.814512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.814527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 
00:27:20.831 [2024-11-20 16:28:51.824445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.824522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.824535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.824541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.824547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.824561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 00:27:20.831 [2024-11-20 16:28:51.834475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.834523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.834536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.834543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.834548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.834562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 00:27:20.831 [2024-11-20 16:28:51.844495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.844564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.844576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.844583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.844588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.844602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 
00:27:20.831 [2024-11-20 16:28:51.854532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.854589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.854602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.854608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.854614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.854629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 00:27:20.831 [2024-11-20 16:28:51.864491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.864544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.864556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.864563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.864568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.864583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 00:27:20.831 [2024-11-20 16:28:51.874586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.874638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.874651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.874658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.874663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.874678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 
00:27:20.831 [2024-11-20 16:28:51.884623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.884678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.884690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.884697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.884703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.884717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 00:27:20.831 [2024-11-20 16:28:51.894683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.894741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.894754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.894760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.894766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.894780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 00:27:20.831 [2024-11-20 16:28:51.904688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.904757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.904770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.904777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.904782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.904797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 
00:27:20.831 [2024-11-20 16:28:51.914700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.914753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.914766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.914773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.914778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.831 [2024-11-20 16:28:51.914793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.831 qpair failed and we were unable to recover it. 00:27:20.831 [2024-11-20 16:28:51.924738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.831 [2024-11-20 16:28:51.924794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.831 [2024-11-20 16:28:51.924807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.831 [2024-11-20 16:28:51.924813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.831 [2024-11-20 16:28:51.924820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:51.924835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 00:27:20.832 [2024-11-20 16:28:51.934764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:51.934813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:51.934827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:51.934834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:51.934839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:51.934854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 
00:27:20.832 [2024-11-20 16:28:51.944846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:51.944903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:51.944916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:51.944926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:51.944931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:51.944945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 00:27:20.832 [2024-11-20 16:28:51.954837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:51.954890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:51.954903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:51.954909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:51.954915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:51.954930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 00:27:20.832 [2024-11-20 16:28:51.964863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:51.964920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:51.964933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:51.964939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:51.964945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:51.964959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 
00:27:20.832 [2024-11-20 16:28:51.974904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:51.974960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:51.974973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:51.974979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:51.974985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:51.975000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 00:27:20.832 [2024-11-20 16:28:51.984930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:51.984991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:51.985004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:51.985010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:51.985016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:51.985033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 00:27:20.832 [2024-11-20 16:28:51.994918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:51.994973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:51.994985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:51.994991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:51.994997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:51.995012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 
00:27:20.832 [2024-11-20 16:28:52.004956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:52.005013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:52.005025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:52.005032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:52.005038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:52.005052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 00:27:20.832 [2024-11-20 16:28:52.014996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:52.015054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:52.015067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:52.015073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:52.015079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:52.015093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 00:27:20.832 [2024-11-20 16:28:52.025072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:52.025137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:52.025150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:52.025156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:52.025162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:52.025176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 
00:27:20.832 [2024-11-20 16:28:52.035041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:52.035096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:52.035109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:52.035115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:52.035120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:52.035135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 00:27:20.832 [2024-11-20 16:28:52.045090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:52.045143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:52.045156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:52.045162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:52.045169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.832 [2024-11-20 16:28:52.045183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.832 qpair failed and we were unable to recover it. 00:27:20.832 [2024-11-20 16:28:52.055109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.832 [2024-11-20 16:28:52.055166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.832 [2024-11-20 16:28:52.055179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.832 [2024-11-20 16:28:52.055186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.832 [2024-11-20 16:28:52.055192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:20.833 [2024-11-20 16:28:52.055209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.833 qpair failed and we were unable to recover it. 
00:27:21.092 [2024-11-20 16:28:52.065147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.065207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.065220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.065226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.065232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.065246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 00:27:21.093 [2024-11-20 16:28:52.075164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.075216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.075232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.075238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.075244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.075258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 00:27:21.093 [2024-11-20 16:28:52.085239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.085293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.085305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.085311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.085317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.085331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 
00:27:21.093 [2024-11-20 16:28:52.095259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.095312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.095324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.095331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.095337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.095351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 00:27:21.093 [2024-11-20 16:28:52.105312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.105363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.105376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.105382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.105388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.105403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 00:27:21.093 [2024-11-20 16:28:52.115294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.115349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.115362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.115368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.115377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.115391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 
00:27:21.093 [2024-11-20 16:28:52.125325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.125378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.125391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.125397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.125403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.125418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 00:27:21.093 [2024-11-20 16:28:52.135354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.135412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.135424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.135431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.135436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.135451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 00:27:21.093 [2024-11-20 16:28:52.145386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.145456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.145470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.145476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.145482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.145497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 
00:27:21.093 [2024-11-20 16:28:52.155404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.155453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.155466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.155472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.155478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.155492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 00:27:21.093 [2024-11-20 16:28:52.165449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.165503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.165517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.165523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.165529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.165543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 00:27:21.093 [2024-11-20 16:28:52.175508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.175564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.175576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.175583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.175589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.093 [2024-11-20 16:28:52.175603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.093 qpair failed and we were unable to recover it. 
00:27:21.093 [2024-11-20 16:28:52.185495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.093 [2024-11-20 16:28:52.185547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.093 [2024-11-20 16:28:52.185559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.093 [2024-11-20 16:28:52.185565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.093 [2024-11-20 16:28:52.185571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.185586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 00:27:21.094 [2024-11-20 16:28:52.195529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.195611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.195623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.195630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.195635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.195649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 00:27:21.094 [2024-11-20 16:28:52.205559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.205613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.205629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.205636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.205642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.205656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 
00:27:21.094 [2024-11-20 16:28:52.215581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.215634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.215647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.215653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.215659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.215674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 00:27:21.094 [2024-11-20 16:28:52.225632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.225682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.225695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.225702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.225707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.225722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 00:27:21.094 [2024-11-20 16:28:52.235692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.235746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.235759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.235766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.235771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.235785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 
00:27:21.094 [2024-11-20 16:28:52.245679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.245736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.245748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.245755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.245766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.245781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 00:27:21.094 [2024-11-20 16:28:52.255702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.255757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.255769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.255775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.255781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.255795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 00:27:21.094 [2024-11-20 16:28:52.265728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.265778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.265790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.265797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.265803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.265816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 
00:27:21.094 [2024-11-20 16:28:52.275754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.275805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.275817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.275823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.275829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.275844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 00:27:21.094 [2024-11-20 16:28:52.285734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.285816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.285829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.285836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.285841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.285855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 00:27:21.094 [2024-11-20 16:28:52.295746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.295819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.295831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.295838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.295844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.295857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 
00:27:21.094 [2024-11-20 16:28:52.305848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.305898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.305910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.305917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.305922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.094 [2024-11-20 16:28:52.305937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.094 qpair failed and we were unable to recover it. 00:27:21.094 [2024-11-20 16:28:52.315865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.094 [2024-11-20 16:28:52.315931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.094 [2024-11-20 16:28:52.315943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.094 [2024-11-20 16:28:52.315949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.094 [2024-11-20 16:28:52.315955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.095 [2024-11-20 16:28:52.315970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.095 qpair failed and we were unable to recover it. 00:27:21.354 [2024-11-20 16:28:52.325898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.354 [2024-11-20 16:28:52.325954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.354 [2024-11-20 16:28:52.325966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.354 [2024-11-20 16:28:52.325973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.354 [2024-11-20 16:28:52.325979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.354 [2024-11-20 16:28:52.325993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.354 qpair failed and we were unable to recover it. 
00:27:21.354 [2024-11-20 16:28:52.335947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.354 [2024-11-20 16:28:52.335999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.354 [2024-11-20 16:28:52.336017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.354 [2024-11-20 16:28:52.336023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.354 [2024-11-20 16:28:52.336029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.354 [2024-11-20 16:28:52.336043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.354 qpair failed and we were unable to recover it. 00:27:21.354 [2024-11-20 16:28:52.346009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.354 [2024-11-20 16:28:52.346068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.354 [2024-11-20 16:28:52.346080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.354 [2024-11-20 16:28:52.346087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.354 [2024-11-20 16:28:52.346093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.354 [2024-11-20 16:28:52.346107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.354 qpair failed and we were unable to recover it. 00:27:21.354 [2024-11-20 16:28:52.355994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.354 [2024-11-20 16:28:52.356050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.354 [2024-11-20 16:28:52.356064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.354 [2024-11-20 16:28:52.356071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.354 [2024-11-20 16:28:52.356076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.354 [2024-11-20 16:28:52.356091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.354 qpair failed and we were unable to recover it. 
00:27:21.354 [2024-11-20 16:28:52.366030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.354 [2024-11-20 16:28:52.366088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.354 [2024-11-20 16:28:52.366101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.354 [2024-11-20 16:28:52.366108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.354 [2024-11-20 16:28:52.366114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.354 [2024-11-20 16:28:52.366128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.354 qpair failed and we were unable to recover it. 00:27:21.354 [2024-11-20 16:28:52.376102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.354 [2024-11-20 16:28:52.376157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.354 [2024-11-20 16:28:52.376169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.354 [2024-11-20 16:28:52.376179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.354 [2024-11-20 16:28:52.376185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.354 [2024-11-20 16:28:52.376199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.354 qpair failed and we were unable to recover it. 00:27:21.355 [2024-11-20 16:28:52.386014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.386066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.386079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.386085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.386091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.386106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 
00:27:21.355 [2024-11-20 16:28:52.396108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.396160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.396173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.396179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.396185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.396199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 00:27:21.355 [2024-11-20 16:28:52.406147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.406211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.406223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.406230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.406236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.406251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 00:27:21.355 [2024-11-20 16:28:52.416178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.416280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.416293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.416299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.416305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.416321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 
00:27:21.355 [2024-11-20 16:28:52.426197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.426251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.426264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.426270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.426276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.426290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 00:27:21.355 [2024-11-20 16:28:52.436220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.436277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.436289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.436296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.436302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.436316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 00:27:21.355 [2024-11-20 16:28:52.446256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.446313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.446326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.446332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.446338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.446352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 
00:27:21.355 [2024-11-20 16:28:52.456236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.456297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.456310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.456317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.456323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.456337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 00:27:21.355 [2024-11-20 16:28:52.466324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.466383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.466396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.466402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.466408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.466422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 00:27:21.355 [2024-11-20 16:28:52.476281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.476335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.476348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.476354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.476360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.476374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 
00:27:21.355 [2024-11-20 16:28:52.486395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.486469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.486482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.486488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.486494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.486509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 00:27:21.355 [2024-11-20 16:28:52.496406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.496464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.355 [2024-11-20 16:28:52.496476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.355 [2024-11-20 16:28:52.496482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.355 [2024-11-20 16:28:52.496488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.355 [2024-11-20 16:28:52.496502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.355 qpair failed and we were unable to recover it. 00:27:21.355 [2024-11-20 16:28:52.506467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.355 [2024-11-20 16:28:52.506519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.356 [2024-11-20 16:28:52.506531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.356 [2024-11-20 16:28:52.506541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.356 [2024-11-20 16:28:52.506547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.356 [2024-11-20 16:28:52.506561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.356 qpair failed and we were unable to recover it. 
00:27:21.356 [2024-11-20 16:28:52.516475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.356 [2024-11-20 16:28:52.516535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.356 [2024-11-20 16:28:52.516548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.356 [2024-11-20 16:28:52.516554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.356 [2024-11-20 16:28:52.516560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.356 [2024-11-20 16:28:52.516574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.356 qpair failed and we were unable to recover it. 00:27:21.356 [2024-11-20 16:28:52.526495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.356 [2024-11-20 16:28:52.526550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.356 [2024-11-20 16:28:52.526563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.356 [2024-11-20 16:28:52.526570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.356 [2024-11-20 16:28:52.526576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.356 [2024-11-20 16:28:52.526590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.356 qpair failed and we were unable to recover it. 00:27:21.356 [2024-11-20 16:28:52.536522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.356 [2024-11-20 16:28:52.536578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.356 [2024-11-20 16:28:52.536590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.356 [2024-11-20 16:28:52.536596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.356 [2024-11-20 16:28:52.536602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.356 [2024-11-20 16:28:52.536616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.356 qpair failed and we were unable to recover it. 
00:27:21.356 [2024-11-20 16:28:52.546538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.356 [2024-11-20 16:28:52.546591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.356 [2024-11-20 16:28:52.546604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.356 [2024-11-20 16:28:52.546610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.356 [2024-11-20 16:28:52.546616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.356 [2024-11-20 16:28:52.546633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.356 qpair failed and we were unable to recover it. 00:27:21.356 [2024-11-20 16:28:52.556538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.356 [2024-11-20 16:28:52.556587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.356 [2024-11-20 16:28:52.556599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.356 [2024-11-20 16:28:52.556606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.356 [2024-11-20 16:28:52.556611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.356 [2024-11-20 16:28:52.556625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.356 qpair failed and we were unable to recover it. 00:27:21.356 [2024-11-20 16:28:52.566607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.356 [2024-11-20 16:28:52.566679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.356 [2024-11-20 16:28:52.566692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.356 [2024-11-20 16:28:52.566698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.356 [2024-11-20 16:28:52.566704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.356 [2024-11-20 16:28:52.566719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.356 qpair failed and we were unable to recover it. 
00:27:21.356 [2024-11-20 16:28:52.576567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.356 [2024-11-20 16:28:52.576642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.356 [2024-11-20 16:28:52.576655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.356 [2024-11-20 16:28:52.576661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.356 [2024-11-20 16:28:52.576667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.356 [2024-11-20 16:28:52.576682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.356 qpair failed and we were unable to recover it. 00:27:21.616 [2024-11-20 16:28:52.586673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.616 [2024-11-20 16:28:52.586731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.616 [2024-11-20 16:28:52.586743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.616 [2024-11-20 16:28:52.586750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.616 [2024-11-20 16:28:52.586756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.616 [2024-11-20 16:28:52.586770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.616 qpair failed and we were unable to recover it. 00:27:21.616 [2024-11-20 16:28:52.596677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.616 [2024-11-20 16:28:52.596726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.616 [2024-11-20 16:28:52.596738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.616 [2024-11-20 16:28:52.596745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.616 [2024-11-20 16:28:52.596750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.616 [2024-11-20 16:28:52.596765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.616 qpair failed and we were unable to recover it. 
00:27:21.616 [2024-11-20 16:28:52.606719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.616 [2024-11-20 16:28:52.606775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.616 [2024-11-20 16:28:52.606788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.616 [2024-11-20 16:28:52.606794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.616 [2024-11-20 16:28:52.606800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.616 [2024-11-20 16:28:52.606814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.616 qpair failed and we were unable to recover it. 00:27:21.616 [2024-11-20 16:28:52.616676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.616 [2024-11-20 16:28:52.616729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.616 [2024-11-20 16:28:52.616741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.616 [2024-11-20 16:28:52.616747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.616 [2024-11-20 16:28:52.616753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.616 [2024-11-20 16:28:52.616767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.616 qpair failed and we were unable to recover it. 00:27:21.616 [2024-11-20 16:28:52.626705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.616 [2024-11-20 16:28:52.626762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.616 [2024-11-20 16:28:52.626774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.616 [2024-11-20 16:28:52.626781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.616 [2024-11-20 16:28:52.626787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.616 [2024-11-20 16:28:52.626801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.616 qpair failed and we were unable to recover it. 
00:27:21.616 [2024-11-20 16:28:52.636754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.616 [2024-11-20 16:28:52.636838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.616 [2024-11-20 16:28:52.636855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.616 [2024-11-20 16:28:52.636862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.616 [2024-11-20 16:28:52.636868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.616 [2024-11-20 16:28:52.636883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.616 qpair failed and we were unable to recover it. 00:27:21.616 [2024-11-20 16:28:52.646817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.616 [2024-11-20 16:28:52.646871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.616 [2024-11-20 16:28:52.646883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.616 [2024-11-20 16:28:52.646890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.616 [2024-11-20 16:28:52.646895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.616 [2024-11-20 16:28:52.646910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.616 qpair failed and we were unable to recover it. 00:27:21.616 [2024-11-20 16:28:52.656874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.616 [2024-11-20 16:28:52.656939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.616 [2024-11-20 16:28:52.656951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.656958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.656964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.656979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 
00:27:21.617 [2024-11-20 16:28:52.666787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.666848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.666861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.666867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.666873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.666888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 00:27:21.617 [2024-11-20 16:28:52.676861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.676912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.676925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.676931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.676941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.676955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 00:27:21.617 [2024-11-20 16:28:52.686904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.686959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.686972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.686978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.686984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.686998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 
00:27:21.617 [2024-11-20 16:28:52.696894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.696969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.696982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.696988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.696993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.697008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 00:27:21.617 [2024-11-20 16:28:52.706917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.706971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.706984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.706990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.706996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.707011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 00:27:21.617 [2024-11-20 16:28:52.717036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.717092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.717106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.717112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.717119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.717134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 
00:27:21.617 [2024-11-20 16:28:52.727025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.727081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.727095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.727101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.727107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.727123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 00:27:21.617 [2024-11-20 16:28:52.737050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.737108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.737121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.737128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.737135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.737149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 00:27:21.617 [2024-11-20 16:28:52.747030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.747082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.747095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.747101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.747107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.747122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 
00:27:21.617 [2024-11-20 16:28:52.757188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.757268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.757281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.757287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.757293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.757308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 00:27:21.617 [2024-11-20 16:28:52.767092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.767157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.767175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.767182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.767188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.767207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 00:27:21.617 [2024-11-20 16:28:52.777115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.777177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.617 [2024-11-20 16:28:52.777190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.617 [2024-11-20 16:28:52.777197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.617 [2024-11-20 16:28:52.777207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.617 [2024-11-20 16:28:52.777222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.617 qpair failed and we were unable to recover it. 
00:27:21.617 [2024-11-20 16:28:52.787135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.617 [2024-11-20 16:28:52.787194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.618 [2024-11-20 16:28:52.787211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.618 [2024-11-20 16:28:52.787218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.618 [2024-11-20 16:28:52.787223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.618 [2024-11-20 16:28:52.787238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.618 qpair failed and we were unable to recover it. 00:27:21.618 [2024-11-20 16:28:52.797246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.618 [2024-11-20 16:28:52.797298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.618 [2024-11-20 16:28:52.797311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.618 [2024-11-20 16:28:52.797317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.618 [2024-11-20 16:28:52.797323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.618 [2024-11-20 16:28:52.797338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.618 qpair failed and we were unable to recover it. 00:27:21.618 [2024-11-20 16:28:52.807189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.618 [2024-11-20 16:28:52.807251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.618 [2024-11-20 16:28:52.807263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.618 [2024-11-20 16:28:52.807270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.618 [2024-11-20 16:28:52.807279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.618 [2024-11-20 16:28:52.807294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.618 qpair failed and we were unable to recover it. 
00:27:21.618 [2024-11-20 16:28:52.817273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.618 [2024-11-20 16:28:52.817350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.618 [2024-11-20 16:28:52.817363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.618 [2024-11-20 16:28:52.817370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.618 [2024-11-20 16:28:52.817375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.618 [2024-11-20 16:28:52.817390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.618 qpair failed and we were unable to recover it. 00:27:21.618 [2024-11-20 16:28:52.827277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.618 [2024-11-20 16:28:52.827328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.618 [2024-11-20 16:28:52.827341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.618 [2024-11-20 16:28:52.827348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.618 [2024-11-20 16:28:52.827354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.618 [2024-11-20 16:28:52.827369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.618 qpair failed and we were unable to recover it. 00:27:21.618 [2024-11-20 16:28:52.837389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.618 [2024-11-20 16:28:52.837473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.618 [2024-11-20 16:28:52.837486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.618 [2024-11-20 16:28:52.837492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.618 [2024-11-20 16:28:52.837498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.618 [2024-11-20 16:28:52.837513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.618 qpair failed and we were unable to recover it. 
00:27:21.877 [2024-11-20 16:28:52.847349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.877 [2024-11-20 16:28:52.847405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.877 [2024-11-20 16:28:52.847418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.877 [2024-11-20 16:28:52.847424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.877 [2024-11-20 16:28:52.847430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.877 [2024-11-20 16:28:52.847444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.877 qpair failed and we were unable to recover it. 00:27:21.878 [2024-11-20 16:28:52.857425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.857482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.857495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.857502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.857508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.857522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 00:27:21.878 [2024-11-20 16:28:52.867380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.867436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.867449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.867455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.867461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.867475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 
00:27:21.878 [2024-11-20 16:28:52.877458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.877512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.877524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.877531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.877536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.877551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 00:27:21.878 [2024-11-20 16:28:52.887477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.887564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.887577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.887583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.887589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.887603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 00:27:21.878 [2024-11-20 16:28:52.897518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.897588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.897603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.897609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.897615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.897629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 
00:27:21.878 [2024-11-20 16:28:52.907476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.907532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.907545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.907551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.907557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.907571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 00:27:21.878 [2024-11-20 16:28:52.917543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.917597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.917610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.917616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.917622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.917637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 00:27:21.878 [2024-11-20 16:28:52.927596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.927653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.927666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.927672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.927678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.927693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 
00:27:21.878 [2024-11-20 16:28:52.937653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.937708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.937720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.937729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.937735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.937749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 00:27:21.878 [2024-11-20 16:28:52.947658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.947711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.947723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.947729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.947735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.947749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 00:27:21.878 [2024-11-20 16:28:52.957629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.957680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.957692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.957699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.957704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.957718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 
00:27:21.878 [2024-11-20 16:28:52.967654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.967713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.967726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.967732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.967738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.967753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.878 qpair failed and we were unable to recover it. 00:27:21.878 [2024-11-20 16:28:52.977756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.878 [2024-11-20 16:28:52.977827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.878 [2024-11-20 16:28:52.977841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.878 [2024-11-20 16:28:52.977847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.878 [2024-11-20 16:28:52.977853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.878 [2024-11-20 16:28:52.977872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 00:27:21.879 [2024-11-20 16:28:52.987776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:52.987828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:52.987841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:52.987847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:52.987853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:52.987867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 
00:27:21.879 [2024-11-20 16:28:52.997814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:52.997882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:52.997895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:52.997902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:52.997907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:52.997922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 00:27:21.879 [2024-11-20 16:28:53.007828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:53.007886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:53.007898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:53.007904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:53.007910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:53.007924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 00:27:21.879 [2024-11-20 16:28:53.017877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:53.017933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:53.017945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:53.017952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:53.017958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:53.017972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 
00:27:21.879 [2024-11-20 16:28:53.027887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:53.027940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:53.027953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:53.027959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:53.027965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:53.027980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 00:27:21.879 [2024-11-20 16:28:53.037919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:53.037972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:53.037984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:53.037990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:53.037996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:53.038010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 00:27:21.879 [2024-11-20 16:28:53.047988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:53.048061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:53.048074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:53.048080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:53.048086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:53.048100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 
00:27:21.879 [2024-11-20 16:28:53.057920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:53.057976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:53.057988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:53.057994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:53.058001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:53.058015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 00:27:21.879 [2024-11-20 16:28:53.068005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:53.068080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:53.068093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:53.068104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:53.068110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:53.068125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 00:27:21.879 [2024-11-20 16:28:53.077961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:53.078014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:53.078027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:53.078033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:53.078040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:53.078054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 
00:27:21.879 [2024-11-20 16:28:53.088082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:53.088135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:53.088148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:53.088154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:53.088160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:53.088174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 00:27:21.879 [2024-11-20 16:28:53.098090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.879 [2024-11-20 16:28:53.098145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.879 [2024-11-20 16:28:53.098158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.879 [2024-11-20 16:28:53.098164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.879 [2024-11-20 16:28:53.098170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:21.879 [2024-11-20 16:28:53.098184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.879 qpair failed and we were unable to recover it. 00:27:22.140 [2024-11-20 16:28:53.108111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.108164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.108177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.108183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.108189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.140 [2024-11-20 16:28:53.108210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.140 qpair failed and we were unable to recover it. 
00:27:22.140 [2024-11-20 16:28:53.118137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.118189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.118205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.118212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.118218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.140 [2024-11-20 16:28:53.118233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.140 qpair failed and we were unable to recover it. 00:27:22.140 [2024-11-20 16:28:53.128195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.128256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.128269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.128275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.128281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.140 [2024-11-20 16:28:53.128296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.140 qpair failed and we were unable to recover it. 00:27:22.140 [2024-11-20 16:28:53.138209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.138269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.138289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.138296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.138302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.140 [2024-11-20 16:28:53.138321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.140 qpair failed and we were unable to recover it. 
00:27:22.140 [2024-11-20 16:28:53.148280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.148337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.148350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.148357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.148363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.140 [2024-11-20 16:28:53.148378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.140 qpair failed and we were unable to recover it. 00:27:22.140 [2024-11-20 16:28:53.158250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.158305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.158319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.158325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.158331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.140 [2024-11-20 16:28:53.158345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.140 qpair failed and we were unable to recover it. 00:27:22.140 [2024-11-20 16:28:53.168211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.168268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.168281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.168287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.168293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.140 [2024-11-20 16:28:53.168307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.140 qpair failed and we were unable to recover it. 
00:27:22.140 [2024-11-20 16:28:53.178361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.178431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.178443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.178449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.178455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.140 [2024-11-20 16:28:53.178469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.140 qpair failed and we were unable to recover it. 00:27:22.140 [2024-11-20 16:28:53.188384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.188437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.188451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.188458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.188464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.140 [2024-11-20 16:28:53.188479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.140 qpair failed and we were unable to recover it. 00:27:22.140 [2024-11-20 16:28:53.198424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.198470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.198486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.198492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.198498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.140 [2024-11-20 16:28:53.198513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.140 qpair failed and we were unable to recover it. 
00:27:22.140 [2024-11-20 16:28:53.208433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.208511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.208523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.208529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.208535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.140 [2024-11-20 16:28:53.208549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.140 qpair failed and we were unable to recover it. 00:27:22.140 [2024-11-20 16:28:53.218490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.140 [2024-11-20 16:28:53.218550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.140 [2024-11-20 16:28:53.218563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.140 [2024-11-20 16:28:53.218569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.140 [2024-11-20 16:28:53.218575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.218589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 00:27:22.141 [2024-11-20 16:28:53.228451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.228502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.228515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.228521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.228527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.228542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 
00:27:22.141 [2024-11-20 16:28:53.238520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.238584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.238597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.238603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.238612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.238627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 00:27:22.141 [2024-11-20 16:28:53.248547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.248639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.248652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.248659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.248665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.248680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 00:27:22.141 [2024-11-20 16:28:53.258554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.258609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.258622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.258629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.258634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.258649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 
00:27:22.141 [2024-11-20 16:28:53.268564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.268614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.268626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.268633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.268639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.268653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 00:27:22.141 [2024-11-20 16:28:53.278621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.278676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.278688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.278695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.278701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.278716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 00:27:22.141 [2024-11-20 16:28:53.288630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.288701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.288714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.288721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.288727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.288741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 
00:27:22.141 [2024-11-20 16:28:53.298694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.298749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.298761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.298768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.298773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.298788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 00:27:22.141 [2024-11-20 16:28:53.308674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.308757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.308769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.308776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.308781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.308796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 00:27:22.141 [2024-11-20 16:28:53.318643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.318737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.318750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.318756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.318762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.318776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 
00:27:22.141 [2024-11-20 16:28:53.328790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.328847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.328863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.328870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.328877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.328891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 00:27:22.141 [2024-11-20 16:28:53.338773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.338842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.338854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.338860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.338866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.141 [2024-11-20 16:28:53.338880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.141 qpair failed and we were unable to recover it. 00:27:22.141 [2024-11-20 16:28:53.348795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.141 [2024-11-20 16:28:53.348849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.141 [2024-11-20 16:28:53.348861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.141 [2024-11-20 16:28:53.348867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.141 [2024-11-20 16:28:53.348873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.142 [2024-11-20 16:28:53.348887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.142 qpair failed and we were unable to recover it. 
00:27:22.142 [2024-11-20 16:28:53.358817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.142 [2024-11-20 16:28:53.358886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.142 [2024-11-20 16:28:53.358899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.142 [2024-11-20 16:28:53.358905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.142 [2024-11-20 16:28:53.358910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.142 [2024-11-20 16:28:53.358925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.142 qpair failed and we were unable to recover it. 00:27:22.142 [2024-11-20 16:28:53.368789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.142 [2024-11-20 16:28:53.368846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.142 [2024-11-20 16:28:53.368859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.142 [2024-11-20 16:28:53.368865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.142 [2024-11-20 16:28:53.368873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.142 [2024-11-20 16:28:53.368887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.142 qpair failed and we were unable to recover it. 00:27:22.401 [2024-11-20 16:28:53.378886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.401 [2024-11-20 16:28:53.378941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.401 [2024-11-20 16:28:53.378954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.401 [2024-11-20 16:28:53.378960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.378966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.378980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 
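Each rejected CONNECT above completes with "sct 1, sc 130": status code type 1 is the command-specific group, and 0x82 (130) is the Fabrics "Connect Invalid Parameters" code, which is how the "Unknown controller ID 0x1" rejection is reported back to the host. A hedged sketch of inspecting those fields from a completion follows; the callback name and context are hypothetical, while struct spdk_nvme_cpl and spdk_nvme_cpl_is_error() are standard SPDK host API.

```c
/* Illustrative sketch: how the "sct 1, sc 130" status seen above would be
 * examined from a completion callback. The callback name is a placeholder;
 * the struct fields and spdk_nvme_cpl_is_error() are part of the SPDK
 * host API. */
#include <stdio.h>

#include "spdk/nvme.h"

static void
connect_done_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* For the failures above: sct == 0x1 (command specific) and
		 * sc == 0x82 (decimal 130), the Fabrics "Connect Invalid
		 * Parameters" code used to reject the unknown controller ID. */
		fprintf(stderr, "CONNECT failed: sct 0x%x, sc 0x%x\n",
			cpl->status.sct, cpl->status.sc);
	}
}
```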
00:27:22.402 [2024-11-20 16:28:53.388926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.388980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.388992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.388998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.389004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.389019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 00:27:22.402 [2024-11-20 16:28:53.398972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.399025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.399038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.399045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.399051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.399065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 00:27:22.402 [2024-11-20 16:28:53.408920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.408975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.408988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.408994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.409000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.409014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 
00:27:22.402 [2024-11-20 16:28:53.419020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.419086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.419099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.419105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.419111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.419125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 00:27:22.402 [2024-11-20 16:28:53.428968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.429027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.429040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.429046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.429051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.429066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 00:27:22.402 [2024-11-20 16:28:53.438973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.439026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.439039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.439045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.439051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.439065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 
00:27:22.402 [2024-11-20 16:28:53.449090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.449167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.449180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.449186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.449192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.449212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 00:27:22.402 [2024-11-20 16:28:53.459115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.459169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.459185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.459192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.459197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.459215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 00:27:22.402 [2024-11-20 16:28:53.469130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.469190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.469207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.469213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.469219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.469234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 
00:27:22.402 [2024-11-20 16:28:53.479183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.479244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.479257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.479264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.479269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.479284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 00:27:22.402 [2024-11-20 16:28:53.489200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.489265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.489278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.489285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.489291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.489305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 00:27:22.402 [2024-11-20 16:28:53.499263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.499326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.499338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.402 [2024-11-20 16:28:53.499348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.402 [2024-11-20 16:28:53.499354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.402 [2024-11-20 16:28:53.499368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.402 qpair failed and we were unable to recover it. 
00:27:22.402 [2024-11-20 16:28:53.509304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.402 [2024-11-20 16:28:53.509357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.402 [2024-11-20 16:28:53.509370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.509376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.509382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.509395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 00:27:22.403 [2024-11-20 16:28:53.519280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.519334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.519347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.519353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.519359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.519373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 00:27:22.403 [2024-11-20 16:28:53.529310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.529367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.529380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.529386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.529392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.529406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 
00:27:22.403 [2024-11-20 16:28:53.539274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.539360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.539372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.539379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.539384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.539402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 00:27:22.403 [2024-11-20 16:28:53.549351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.549406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.549418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.549425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.549431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.549444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 00:27:22.403 [2024-11-20 16:28:53.559384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.559436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.559449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.559455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.559461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.559475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 
00:27:22.403 [2024-11-20 16:28:53.569457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.569511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.569524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.569530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.569536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.569550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 00:27:22.403 [2024-11-20 16:28:53.579450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.579522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.579534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.579540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.579546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.579560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 00:27:22.403 [2024-11-20 16:28:53.589466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.589555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.589567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.589573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.589579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.589592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 
00:27:22.403 [2024-11-20 16:28:53.599503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.599570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.599584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.599591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.599596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.599611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 00:27:22.403 [2024-11-20 16:28:53.609534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.609592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.609604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.609611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.609616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.609631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 00:27:22.403 [2024-11-20 16:28:53.619559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.619624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.619637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.619643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.619649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.619664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 
00:27:22.403 [2024-11-20 16:28:53.629518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.403 [2024-11-20 16:28:53.629572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.403 [2024-11-20 16:28:53.629585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.403 [2024-11-20 16:28:53.629594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.403 [2024-11-20 16:28:53.629600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.403 [2024-11-20 16:28:53.629614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.403 qpair failed and we were unable to recover it. 00:27:22.663 [2024-11-20 16:28:53.639611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.663 [2024-11-20 16:28:53.639663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.663 [2024-11-20 16:28:53.639675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.663 [2024-11-20 16:28:53.639682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.663 [2024-11-20 16:28:53.639687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.663 [2024-11-20 16:28:53.639702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.663 qpair failed and we were unable to recover it. 00:27:22.663 [2024-11-20 16:28:53.649628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.663 [2024-11-20 16:28:53.649684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.663 [2024-11-20 16:28:53.649697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.663 [2024-11-20 16:28:53.649703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.663 [2024-11-20 16:28:53.649709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.663 [2024-11-20 16:28:53.649723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.663 qpair failed and we were unable to recover it. 
00:27:22.663 [2024-11-20 16:28:53.659719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.663 [2024-11-20 16:28:53.659774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.663 [2024-11-20 16:28:53.659787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.663 [2024-11-20 16:28:53.659793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.663 [2024-11-20 16:28:53.659799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.663 [2024-11-20 16:28:53.659812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.663 qpair failed and we were unable to recover it. 00:27:22.663 [2024-11-20 16:28:53.669692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.669748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.669760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.669767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.669773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.669789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 00:27:22.664 [2024-11-20 16:28:53.679726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.679783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.679795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.679801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.679807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.679821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 
00:27:22.664 [2024-11-20 16:28:53.689754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.689807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.689820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.689827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.689833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.689847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 00:27:22.664 [2024-11-20 16:28:53.699814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.699868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.699881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.699887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.699893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.699907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 00:27:22.664 [2024-11-20 16:28:53.709735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.709789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.709802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.709808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.709814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.709828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 
00:27:22.664 [2024-11-20 16:28:53.719826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.719876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.719890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.719897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.719903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.719917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 00:27:22.664 [2024-11-20 16:28:53.729858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.729912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.729926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.729932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.729938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.729952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 00:27:22.664 [2024-11-20 16:28:53.739883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.739938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.739950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.739957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.739962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.739976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 
00:27:22.664 [2024-11-20 16:28:53.749921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.750018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.750031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.750037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.750043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.750057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 00:27:22.664 [2024-11-20 16:28:53.759968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.760068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.760085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.760091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.760097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.760111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 00:27:22.664 [2024-11-20 16:28:53.769966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.770027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.770041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.770048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.770053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.770068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 
00:27:22.664 [2024-11-20 16:28:53.780011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.780095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.780108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.780114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.780119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.780134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 00:27:22.664 [2024-11-20 16:28:53.790007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.664 [2024-11-20 16:28:53.790099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.664 [2024-11-20 16:28:53.790112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.664 [2024-11-20 16:28:53.790119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.664 [2024-11-20 16:28:53.790124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.664 [2024-11-20 16:28:53.790138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.664 qpair failed and we were unable to recover it. 00:27:22.664 [2024-11-20 16:28:53.800065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.665 [2024-11-20 16:28:53.800142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.665 [2024-11-20 16:28:53.800155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.665 [2024-11-20 16:28:53.800162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.665 [2024-11-20 16:28:53.800170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.665 [2024-11-20 16:28:53.800185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.665 qpair failed and we were unable to recover it. 
00:27:22.665 [2024-11-20 16:28:53.810043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.665 [2024-11-20 16:28:53.810100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.665 [2024-11-20 16:28:53.810114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.665 [2024-11-20 16:28:53.810122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.665 [2024-11-20 16:28:53.810128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.665 [2024-11-20 16:28:53.810144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.665 qpair failed and we were unable to recover it. 00:27:22.665 [2024-11-20 16:28:53.820137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.665 [2024-11-20 16:28:53.820211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.665 [2024-11-20 16:28:53.820223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.665 [2024-11-20 16:28:53.820230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.665 [2024-11-20 16:28:53.820236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.665 [2024-11-20 16:28:53.820251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.665 qpair failed and we were unable to recover it. 00:27:22.665 [2024-11-20 16:28:53.830140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.665 [2024-11-20 16:28:53.830192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.665 [2024-11-20 16:28:53.830210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.665 [2024-11-20 16:28:53.830216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.665 [2024-11-20 16:28:53.830222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.665 [2024-11-20 16:28:53.830236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.665 qpair failed and we were unable to recover it. 
00:27:22.665 [2024-11-20 16:28:53.840206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.665 [2024-11-20 16:28:53.840262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.665 [2024-11-20 16:28:53.840276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.665 [2024-11-20 16:28:53.840282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.665 [2024-11-20 16:28:53.840288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.665 [2024-11-20 16:28:53.840302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.665 qpair failed and we were unable to recover it. 00:27:22.665 [2024-11-20 16:28:53.850211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.665 [2024-11-20 16:28:53.850268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.665 [2024-11-20 16:28:53.850281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.665 [2024-11-20 16:28:53.850287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.665 [2024-11-20 16:28:53.850293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.665 [2024-11-20 16:28:53.850308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.665 qpair failed and we were unable to recover it. 00:27:22.665 [2024-11-20 16:28:53.860242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.665 [2024-11-20 16:28:53.860295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.665 [2024-11-20 16:28:53.860308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.665 [2024-11-20 16:28:53.860314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.665 [2024-11-20 16:28:53.860320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.665 [2024-11-20 16:28:53.860335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.665 qpair failed and we were unable to recover it. 
00:27:22.665 [2024-11-20 16:28:53.870263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.665 [2024-11-20 16:28:53.870312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.665 [2024-11-20 16:28:53.870325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.665 [2024-11-20 16:28:53.870331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.665 [2024-11-20 16:28:53.870337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.665 [2024-11-20 16:28:53.870351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.665 qpair failed and we were unable to recover it. 00:27:22.665 [2024-11-20 16:28:53.880292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.665 [2024-11-20 16:28:53.880343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.665 [2024-11-20 16:28:53.880355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.665 [2024-11-20 16:28:53.880362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.665 [2024-11-20 16:28:53.880368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.665 [2024-11-20 16:28:53.880382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.665 qpair failed and we were unable to recover it. 00:27:22.665 [2024-11-20 16:28:53.890328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.665 [2024-11-20 16:28:53.890383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.665 [2024-11-20 16:28:53.890398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.665 [2024-11-20 16:28:53.890404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.665 [2024-11-20 16:28:53.890410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.665 [2024-11-20 16:28:53.890424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.665 qpair failed and we were unable to recover it. 
00:27:22.925 [2024-11-20 16:28:53.900375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.925 [2024-11-20 16:28:53.900445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.925 [2024-11-20 16:28:53.900458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.925 [2024-11-20 16:28:53.900464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.925 [2024-11-20 16:28:53.900470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.925 [2024-11-20 16:28:53.900485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.925 qpair failed and we were unable to recover it. 00:27:22.925 [2024-11-20 16:28:53.910388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.925 [2024-11-20 16:28:53.910439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.925 [2024-11-20 16:28:53.910451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.925 [2024-11-20 16:28:53.910457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.925 [2024-11-20 16:28:53.910463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.925 [2024-11-20 16:28:53.910477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.925 qpair failed and we were unable to recover it. 00:27:22.925 [2024-11-20 16:28:53.920412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.925 [2024-11-20 16:28:53.920466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.925 [2024-11-20 16:28:53.920478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.925 [2024-11-20 16:28:53.920484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.925 [2024-11-20 16:28:53.920490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.925 [2024-11-20 16:28:53.920504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.925 qpair failed and we were unable to recover it. 
00:27:22.925 [2024-11-20 16:28:53.930456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.925 [2024-11-20 16:28:53.930511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.925 [2024-11-20 16:28:53.930524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.925 [2024-11-20 16:28:53.930530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.925 [2024-11-20 16:28:53.930538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.925 [2024-11-20 16:28:53.930553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.925 qpair failed and we were unable to recover it. 00:27:22.925 [2024-11-20 16:28:53.940481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.925 [2024-11-20 16:28:53.940536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.925 [2024-11-20 16:28:53.940549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.925 [2024-11-20 16:28:53.940555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.925 [2024-11-20 16:28:53.940561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.925 [2024-11-20 16:28:53.940574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.925 qpair failed and we were unable to recover it. 00:27:22.925 [2024-11-20 16:28:53.950503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.925 [2024-11-20 16:28:53.950558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.925 [2024-11-20 16:28:53.950570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:53.950576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:53.950582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:53.950596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 
00:27:22.926 [2024-11-20 16:28:53.960529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:53.960579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.926 [2024-11-20 16:28:53.960592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:53.960598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:53.960603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:53.960618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 00:27:22.926 [2024-11-20 16:28:53.970553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:53.970612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.926 [2024-11-20 16:28:53.970624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:53.970630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:53.970636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:53.970650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 00:27:22.926 [2024-11-20 16:28:53.980584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:53.980638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.926 [2024-11-20 16:28:53.980650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:53.980657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:53.980662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:53.980676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 
00:27:22.926 [2024-11-20 16:28:53.990614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:53.990666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.926 [2024-11-20 16:28:53.990679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:53.990685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:53.990691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:53.990705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 00:27:22.926 [2024-11-20 16:28:54.000655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:54.000708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.926 [2024-11-20 16:28:54.000721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:54.000727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:54.000733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:54.000747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 00:27:22.926 [2024-11-20 16:28:54.010601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:54.010656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.926 [2024-11-20 16:28:54.010668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:54.010675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:54.010681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:54.010695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 
00:27:22.926 [2024-11-20 16:28:54.020727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:54.020802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.926 [2024-11-20 16:28:54.020819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:54.020826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:54.020832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:54.020847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 00:27:22.926 [2024-11-20 16:28:54.030718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:54.030772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.926 [2024-11-20 16:28:54.030786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:54.030792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:54.030798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:54.030812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 00:27:22.926 [2024-11-20 16:28:54.040693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:54.040776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.926 [2024-11-20 16:28:54.040789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:54.040796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:54.040802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:54.040817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 
00:27:22.926 [2024-11-20 16:28:54.050810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:54.050868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.926 [2024-11-20 16:28:54.050880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:54.050887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:54.050892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:54.050908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 00:27:22.926 [2024-11-20 16:28:54.060787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:54.060860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.926 [2024-11-20 16:28:54.060873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.926 [2024-11-20 16:28:54.060883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.926 [2024-11-20 16:28:54.060888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.926 [2024-11-20 16:28:54.060902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.926 qpair failed and we were unable to recover it. 00:27:22.926 [2024-11-20 16:28:54.070788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.926 [2024-11-20 16:28:54.070842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.927 [2024-11-20 16:28:54.070854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.927 [2024-11-20 16:28:54.070860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.927 [2024-11-20 16:28:54.070866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.927 [2024-11-20 16:28:54.070881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.927 qpair failed and we were unable to recover it. 
00:27:22.927 [2024-11-20 16:28:54.080856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.927 [2024-11-20 16:28:54.080905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.927 [2024-11-20 16:28:54.080918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.927 [2024-11-20 16:28:54.080924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.927 [2024-11-20 16:28:54.080930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.927 [2024-11-20 16:28:54.080945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.927 qpair failed and we were unable to recover it. 00:27:22.927 [2024-11-20 16:28:54.090850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.927 [2024-11-20 16:28:54.090926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.927 [2024-11-20 16:28:54.090939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.927 [2024-11-20 16:28:54.090946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.927 [2024-11-20 16:28:54.090951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.927 [2024-11-20 16:28:54.090965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.927 qpair failed and we were unable to recover it. 00:27:22.927 [2024-11-20 16:28:54.100931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.927 [2024-11-20 16:28:54.100986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.927 [2024-11-20 16:28:54.100998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.927 [2024-11-20 16:28:54.101004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.927 [2024-11-20 16:28:54.101011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.927 [2024-11-20 16:28:54.101028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.927 qpair failed and we were unable to recover it. 
00:27:22.927 [2024-11-20 16:28:54.110963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.927 [2024-11-20 16:28:54.111024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.927 [2024-11-20 16:28:54.111037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.927 [2024-11-20 16:28:54.111043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.927 [2024-11-20 16:28:54.111049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.927 [2024-11-20 16:28:54.111064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.927 qpair failed and we were unable to recover it. 00:27:22.927 [2024-11-20 16:28:54.121035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.927 [2024-11-20 16:28:54.121089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.927 [2024-11-20 16:28:54.121102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.927 [2024-11-20 16:28:54.121108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.927 [2024-11-20 16:28:54.121114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.927 [2024-11-20 16:28:54.121129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.927 qpair failed and we were unable to recover it. 00:27:22.927 [2024-11-20 16:28:54.131036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.927 [2024-11-20 16:28:54.131117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.927 [2024-11-20 16:28:54.131131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.927 [2024-11-20 16:28:54.131137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.927 [2024-11-20 16:28:54.131143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.927 [2024-11-20 16:28:54.131158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.927 qpair failed and we were unable to recover it. 
00:27:22.927 [2024-11-20 16:28:54.141055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.927 [2024-11-20 16:28:54.141106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.927 [2024-11-20 16:28:54.141119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.927 [2024-11-20 16:28:54.141125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.927 [2024-11-20 16:28:54.141132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.927 [2024-11-20 16:28:54.141146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.927 qpair failed and we were unable to recover it. 00:27:22.927 [2024-11-20 16:28:54.151083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.927 [2024-11-20 16:28:54.151142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.927 [2024-11-20 16:28:54.151156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.927 [2024-11-20 16:28:54.151162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.927 [2024-11-20 16:28:54.151168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:22.927 [2024-11-20 16:28:54.151182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.927 qpair failed and we were unable to recover it. 00:27:23.187 [2024-11-20 16:28:54.161044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.187 [2024-11-20 16:28:54.161094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.187 [2024-11-20 16:28:54.161107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.187 [2024-11-20 16:28:54.161113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.187 [2024-11-20 16:28:54.161119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.187 [2024-11-20 16:28:54.161133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.187 qpair failed and we were unable to recover it. 
00:27:23.187 [2024-11-20 16:28:54.171143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.187 [2024-11-20 16:28:54.171200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.187 [2024-11-20 16:28:54.171216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.187 [2024-11-20 16:28:54.171222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.187 [2024-11-20 16:28:54.171228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.187 [2024-11-20 16:28:54.171242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.187 qpair failed and we were unable to recover it. 00:27:23.187 [2024-11-20 16:28:54.181167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.187 [2024-11-20 16:28:54.181220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.187 [2024-11-20 16:28:54.181233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.187 [2024-11-20 16:28:54.181239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.187 [2024-11-20 16:28:54.181245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.187 [2024-11-20 16:28:54.181260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.187 qpair failed and we were unable to recover it. 00:27:23.187 [2024-11-20 16:28:54.191227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.187 [2024-11-20 16:28:54.191276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.187 [2024-11-20 16:28:54.191289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.187 [2024-11-20 16:28:54.191299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.187 [2024-11-20 16:28:54.191304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.187 [2024-11-20 16:28:54.191319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.187 qpair failed and we were unable to recover it. 
00:27:23.187 [2024-11-20 16:28:54.201199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.187 [2024-11-20 16:28:54.201254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.187 [2024-11-20 16:28:54.201267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.187 [2024-11-20 16:28:54.201274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.187 [2024-11-20 16:28:54.201280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.187 [2024-11-20 16:28:54.201297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.187 qpair failed and we were unable to recover it. 00:27:23.187 [2024-11-20 16:28:54.211291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.187 [2024-11-20 16:28:54.211352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.187 [2024-11-20 16:28:54.211364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.187 [2024-11-20 16:28:54.211371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.187 [2024-11-20 16:28:54.211377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.187 [2024-11-20 16:28:54.211391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.187 qpair failed and we were unable to recover it. 00:27:23.187 [2024-11-20 16:28:54.221272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.187 [2024-11-20 16:28:54.221325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.187 [2024-11-20 16:28:54.221338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.187 [2024-11-20 16:28:54.221344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.187 [2024-11-20 16:28:54.221350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.187 [2024-11-20 16:28:54.221364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.187 qpair failed and we were unable to recover it. 
00:27:23.187 [2024-11-20 16:28:54.231327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.187 [2024-11-20 16:28:54.231376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.187 [2024-11-20 16:28:54.231390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.187 [2024-11-20 16:28:54.231396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.187 [2024-11-20 16:28:54.231402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.231420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 00:27:23.188 [2024-11-20 16:28:54.241324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.241376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.241388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.241395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.241401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.241416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 00:27:23.188 [2024-11-20 16:28:54.251372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.251475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.251488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.251494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.251500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.251515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 
00:27:23.188 [2024-11-20 16:28:54.261370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.261428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.261441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.261448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.261453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.261468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 00:27:23.188 [2024-11-20 16:28:54.271398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.271453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.271466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.271472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.271478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.271492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 00:27:23.188 [2024-11-20 16:28:54.281440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.281492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.281505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.281512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.281518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.281532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 
00:27:23.188 [2024-11-20 16:28:54.291428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.291485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.291498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.291504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.291510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.291523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 00:27:23.188 [2024-11-20 16:28:54.301480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.301533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.301546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.301552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.301558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.301572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 00:27:23.188 [2024-11-20 16:28:54.311495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.311547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.311560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.311566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.311572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.311586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 
00:27:23.188 [2024-11-20 16:28:54.321484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.321564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.321580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.321586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.321592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.321606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 00:27:23.188 [2024-11-20 16:28:54.331571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.331657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.331669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.331676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.331681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.331695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 00:27:23.188 [2024-11-20 16:28:54.341614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.341669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.341681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.341688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.341693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.341707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 
00:27:23.188 [2024-11-20 16:28:54.351679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.351731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.351744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.351750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.351756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.188 [2024-11-20 16:28:54.351770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.188 qpair failed and we were unable to recover it. 00:27:23.188 [2024-11-20 16:28:54.361668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.188 [2024-11-20 16:28:54.361720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.188 [2024-11-20 16:28:54.361733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.188 [2024-11-20 16:28:54.361739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.188 [2024-11-20 16:28:54.361748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.189 [2024-11-20 16:28:54.361762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.189 qpair failed and we were unable to recover it. 00:27:23.189 [2024-11-20 16:28:54.371655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.189 [2024-11-20 16:28:54.371722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.189 [2024-11-20 16:28:54.371734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.189 [2024-11-20 16:28:54.371741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.189 [2024-11-20 16:28:54.371746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.189 [2024-11-20 16:28:54.371760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.189 qpair failed and we were unable to recover it. 
00:27:23.189 [2024-11-20 16:28:54.381671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.189 [2024-11-20 16:28:54.381720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.189 [2024-11-20 16:28:54.381733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.189 [2024-11-20 16:28:54.381739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.189 [2024-11-20 16:28:54.381744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.189 [2024-11-20 16:28:54.381759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.189 qpair failed and we were unable to recover it. 00:27:23.189 [2024-11-20 16:28:54.391762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.189 [2024-11-20 16:28:54.391815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.189 [2024-11-20 16:28:54.391828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.189 [2024-11-20 16:28:54.391834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.189 [2024-11-20 16:28:54.391839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.189 [2024-11-20 16:28:54.391854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.189 qpair failed and we were unable to recover it. 00:27:23.189 [2024-11-20 16:28:54.401802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.189 [2024-11-20 16:28:54.401852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.189 [2024-11-20 16:28:54.401865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.189 [2024-11-20 16:28:54.401871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.189 [2024-11-20 16:28:54.401877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.189 [2024-11-20 16:28:54.401892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.189 qpair failed and we were unable to recover it. 
00:27:23.189 [2024-11-20 16:28:54.411822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.189 [2024-11-20 16:28:54.411874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.189 [2024-11-20 16:28:54.411887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.189 [2024-11-20 16:28:54.411893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.189 [2024-11-20 16:28:54.411899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.189 [2024-11-20 16:28:54.411913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.189 qpair failed and we were unable to recover it. 00:27:23.449 [2024-11-20 16:28:54.421782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.449 [2024-11-20 16:28:54.421852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.449 [2024-11-20 16:28:54.421865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.449 [2024-11-20 16:28:54.421871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.449 [2024-11-20 16:28:54.421877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.449 [2024-11-20 16:28:54.421892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.449 qpair failed and we were unable to recover it. 00:27:23.449 [2024-11-20 16:28:54.431834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.449 [2024-11-20 16:28:54.431917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.449 [2024-11-20 16:28:54.431931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.449 [2024-11-20 16:28:54.431937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.449 [2024-11-20 16:28:54.431943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.449 [2024-11-20 16:28:54.431958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.449 qpair failed and we were unable to recover it. 
00:27:23.449 [2024-11-20 16:28:54.441956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.449 [2024-11-20 16:28:54.442016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.449 [2024-11-20 16:28:54.442029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.449 [2024-11-20 16:28:54.442035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.449 [2024-11-20 16:28:54.442041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.449 [2024-11-20 16:28:54.442055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.449 qpair failed and we were unable to recover it. 00:27:23.449 [2024-11-20 16:28:54.451950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.449 [2024-11-20 16:28:54.452012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.449 [2024-11-20 16:28:54.452030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.449 [2024-11-20 16:28:54.452037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.449 [2024-11-20 16:28:54.452042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.449 [2024-11-20 16:28:54.452058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.449 qpair failed and we were unable to recover it. 00:27:23.449 [2024-11-20 16:28:54.461983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.449 [2024-11-20 16:28:54.462053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.449 [2024-11-20 16:28:54.462066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.449 [2024-11-20 16:28:54.462072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.449 [2024-11-20 16:28:54.462078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.449 [2024-11-20 16:28:54.462092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.449 qpair failed and we were unable to recover it. 
00:27:23.449 [2024-11-20 16:28:54.472009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.449 [2024-11-20 16:28:54.472073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.449 [2024-11-20 16:28:54.472086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.449 [2024-11-20 16:28:54.472093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.449 [2024-11-20 16:28:54.472099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.449 [2024-11-20 16:28:54.472113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.449 qpair failed and we were unable to recover it. 00:27:23.449 [2024-11-20 16:28:54.482064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.449 [2024-11-20 16:28:54.482121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.449 [2024-11-20 16:28:54.482134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.449 [2024-11-20 16:28:54.482141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.449 [2024-11-20 16:28:54.482147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.449 [2024-11-20 16:28:54.482161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.449 qpair failed and we were unable to recover it. 00:27:23.449 [2024-11-20 16:28:54.492093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.449 [2024-11-20 16:28:54.492167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.449 [2024-11-20 16:28:54.492180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.449 [2024-11-20 16:28:54.492186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.492196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.492215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 
00:27:23.450 [2024-11-20 16:28:54.502087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.502143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.502156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.502162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.502168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.502182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 00:27:23.450 [2024-11-20 16:28:54.512151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.512211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.512224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.512231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.512237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.512251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 00:27:23.450 [2024-11-20 16:28:54.522143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.522191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.522212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.522219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.522225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.522240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 
00:27:23.450 [2024-11-20 16:28:54.532169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.532229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.532242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.532249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.532255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.532269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 00:27:23.450 [2024-11-20 16:28:54.542220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.542328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.542341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.542347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.542353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.542368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 00:27:23.450 [2024-11-20 16:28:54.552152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.552205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.552219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.552225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.552231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.552245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 
00:27:23.450 [2024-11-20 16:28:54.562241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.562296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.562308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.562314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.562320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.562334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 00:27:23.450 [2024-11-20 16:28:54.572278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.572333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.572345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.572352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.572358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.572372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 00:27:23.450 [2024-11-20 16:28:54.582303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.582360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.582375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.582382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.582388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.582402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 
00:27:23.450 [2024-11-20 16:28:54.592334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.592394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.592406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.592413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.592418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.592432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 00:27:23.450 [2024-11-20 16:28:54.602292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.602377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.602390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.602396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.602402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.602415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 00:27:23.450 [2024-11-20 16:28:54.612442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.612493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.612506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.612512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.450 [2024-11-20 16:28:54.612518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.450 [2024-11-20 16:28:54.612533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.450 qpair failed and we were unable to recover it. 
00:27:23.450 [2024-11-20 16:28:54.622415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.450 [2024-11-20 16:28:54.622465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.450 [2024-11-20 16:28:54.622478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.450 [2024-11-20 16:28:54.622487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.451 [2024-11-20 16:28:54.622492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.451 [2024-11-20 16:28:54.622506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.451 qpair failed and we were unable to recover it. 00:27:23.451 [2024-11-20 16:28:54.632444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.451 [2024-11-20 16:28:54.632492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.451 [2024-11-20 16:28:54.632505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.451 [2024-11-20 16:28:54.632511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.451 [2024-11-20 16:28:54.632517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.451 [2024-11-20 16:28:54.632532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.451 qpair failed and we were unable to recover it. 00:27:23.451 [2024-11-20 16:28:54.642466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.451 [2024-11-20 16:28:54.642515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.451 [2024-11-20 16:28:54.642529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.451 [2024-11-20 16:28:54.642535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.451 [2024-11-20 16:28:54.642541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.451 [2024-11-20 16:28:54.642556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.451 qpair failed and we were unable to recover it. 
00:27:23.451 [2024-11-20 16:28:54.652506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.451 [2024-11-20 16:28:54.652561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.451 [2024-11-20 16:28:54.652574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.451 [2024-11-20 16:28:54.652580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.451 [2024-11-20 16:28:54.652586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.451 [2024-11-20 16:28:54.652600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.451 qpair failed and we were unable to recover it. 00:27:23.451 [2024-11-20 16:28:54.662529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.451 [2024-11-20 16:28:54.662579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.451 [2024-11-20 16:28:54.662592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.451 [2024-11-20 16:28:54.662598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.451 [2024-11-20 16:28:54.662603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.451 [2024-11-20 16:28:54.662621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.451 qpair failed and we were unable to recover it. 00:27:23.451 [2024-11-20 16:28:54.672554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.451 [2024-11-20 16:28:54.672605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.451 [2024-11-20 16:28:54.672618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.451 [2024-11-20 16:28:54.672624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.451 [2024-11-20 16:28:54.672630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.451 [2024-11-20 16:28:54.672645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.451 qpair failed and we were unable to recover it. 
00:27:23.711 [2024-11-20 16:28:54.682580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.711 [2024-11-20 16:28:54.682627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.711 [2024-11-20 16:28:54.682640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.711 [2024-11-20 16:28:54.682646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.711 [2024-11-20 16:28:54.682652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.711 [2024-11-20 16:28:54.682666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.711 qpair failed and we were unable to recover it. 00:27:23.711 [2024-11-20 16:28:54.692612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.711 [2024-11-20 16:28:54.692671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.711 [2024-11-20 16:28:54.692684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.711 [2024-11-20 16:28:54.692690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.711 [2024-11-20 16:28:54.692696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.711 [2024-11-20 16:28:54.692710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.711 qpair failed and we were unable to recover it. 00:27:23.711 [2024-11-20 16:28:54.702645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.711 [2024-11-20 16:28:54.702699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.711 [2024-11-20 16:28:54.702712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.711 [2024-11-20 16:28:54.702718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.711 [2024-11-20 16:28:54.702723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.711 [2024-11-20 16:28:54.702737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.711 qpair failed and we were unable to recover it. 
00:27:23.711 [2024-11-20 16:28:54.712672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.711 [2024-11-20 16:28:54.712742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.711 [2024-11-20 16:28:54.712754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.711 [2024-11-20 16:28:54.712761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.711 [2024-11-20 16:28:54.712766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.711 [2024-11-20 16:28:54.712781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.711 qpair failed and we were unable to recover it. 00:27:23.711 [2024-11-20 16:28:54.722690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.711 [2024-11-20 16:28:54.722777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.711 [2024-11-20 16:28:54.722791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.711 [2024-11-20 16:28:54.722798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.711 [2024-11-20 16:28:54.722804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.711 [2024-11-20 16:28:54.722818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.711 qpair failed and we were unable to recover it. 00:27:23.711 [2024-11-20 16:28:54.732739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.711 [2024-11-20 16:28:54.732811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.711 [2024-11-20 16:28:54.732824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.711 [2024-11-20 16:28:54.732830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.711 [2024-11-20 16:28:54.732836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.712 [2024-11-20 16:28:54.732851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.712 qpair failed and we were unable to recover it. 
00:27:23.712 [2024-11-20 16:28:54.742759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.712 [2024-11-20 16:28:54.742825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.712 [2024-11-20 16:28:54.742837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.712 [2024-11-20 16:28:54.742844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.712 [2024-11-20 16:28:54.742850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.712 [2024-11-20 16:28:54.742864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.712 qpair failed and we were unable to recover it. 00:27:23.712 [2024-11-20 16:28:54.752840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.712 [2024-11-20 16:28:54.752940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.712 [2024-11-20 16:28:54.752952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.712 [2024-11-20 16:28:54.752962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.712 [2024-11-20 16:28:54.752968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.712 [2024-11-20 16:28:54.752982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.712 qpair failed and we were unable to recover it. 00:27:23.712 [2024-11-20 16:28:54.762807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.712 [2024-11-20 16:28:54.762877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.712 [2024-11-20 16:28:54.762890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.712 [2024-11-20 16:28:54.762897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.712 [2024-11-20 16:28:54.762902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.712 [2024-11-20 16:28:54.762918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.712 qpair failed and we were unable to recover it. 
00:27:23.712 [2024-11-20 16:28:54.772805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.712 [2024-11-20 16:28:54.772863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.712 [2024-11-20 16:28:54.772877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.712 [2024-11-20 16:28:54.772884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.712 [2024-11-20 16:28:54.772890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.712 [2024-11-20 16:28:54.772904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.712 qpair failed and we were unable to recover it. 00:27:23.712 [2024-11-20 16:28:54.782868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.712 [2024-11-20 16:28:54.782928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.712 [2024-11-20 16:28:54.782940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.712 [2024-11-20 16:28:54.782946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.712 [2024-11-20 16:28:54.782951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.712 [2024-11-20 16:28:54.782965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.712 qpair failed and we were unable to recover it. 00:27:23.712 [2024-11-20 16:28:54.792934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.712 [2024-11-20 16:28:54.792988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.712 [2024-11-20 16:28:54.793000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.712 [2024-11-20 16:28:54.793006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.712 [2024-11-20 16:28:54.793012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.712 [2024-11-20 16:28:54.793029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.712 qpair failed and we were unable to recover it. 
00:27:23.712 [2024-11-20 16:28:54.802926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.712 [2024-11-20 16:28:54.802981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.712 [2024-11-20 16:28:54.802994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.712 [2024-11-20 16:28:54.803000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.712 [2024-11-20 16:28:54.803006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.712 [2024-11-20 16:28:54.803020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.712 qpair failed and we were unable to recover it. 00:27:23.712 [2024-11-20 16:28:54.812969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.712 [2024-11-20 16:28:54.813024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.712 [2024-11-20 16:28:54.813037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.712 [2024-11-20 16:28:54.813043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.712 [2024-11-20 16:28:54.813049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.712 [2024-11-20 16:28:54.813063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.712 qpair failed and we were unable to recover it. 00:27:23.712 [2024-11-20 16:28:54.822987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.712 [2024-11-20 16:28:54.823070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.712 [2024-11-20 16:28:54.823082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.712 [2024-11-20 16:28:54.823089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.712 [2024-11-20 16:28:54.823094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.712 [2024-11-20 16:28:54.823109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.712 qpair failed and we were unable to recover it. 
00:27:23.712 [2024-11-20 16:28:54.833009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.712 [2024-11-20 16:28:54.833061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.712 [2024-11-20 16:28:54.833074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.712 [2024-11-20 16:28:54.833080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.712 [2024-11-20 16:28:54.833086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.713 [2024-11-20 16:28:54.833100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.713 qpair failed and we were unable to recover it. 00:27:23.713 [2024-11-20 16:28:54.843023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.713 [2024-11-20 16:28:54.843075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.713 [2024-11-20 16:28:54.843088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.713 [2024-11-20 16:28:54.843095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.713 [2024-11-20 16:28:54.843100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.713 [2024-11-20 16:28:54.843115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.713 qpair failed and we were unable to recover it. 00:27:23.713 [2024-11-20 16:28:54.852989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.713 [2024-11-20 16:28:54.853050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.713 [2024-11-20 16:28:54.853064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.713 [2024-11-20 16:28:54.853071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.713 [2024-11-20 16:28:54.853077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.713 [2024-11-20 16:28:54.853092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.713 qpair failed and we were unable to recover it. 
00:27:23.713 [2024-11-20 16:28:54.863102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.713 [2024-11-20 16:28:54.863169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.713 [2024-11-20 16:28:54.863183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.713 [2024-11-20 16:28:54.863190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.713 [2024-11-20 16:28:54.863196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.713 [2024-11-20 16:28:54.863213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.713 qpair failed and we were unable to recover it. 00:27:23.713 [2024-11-20 16:28:54.873103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.713 [2024-11-20 16:28:54.873156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.713 [2024-11-20 16:28:54.873169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.713 [2024-11-20 16:28:54.873175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.713 [2024-11-20 16:28:54.873182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.713 [2024-11-20 16:28:54.873196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.713 qpair failed and we were unable to recover it. 00:27:23.713 [2024-11-20 16:28:54.883133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.713 [2024-11-20 16:28:54.883206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.713 [2024-11-20 16:28:54.883222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.713 [2024-11-20 16:28:54.883229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.713 [2024-11-20 16:28:54.883236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.713 [2024-11-20 16:28:54.883250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.713 qpair failed and we were unable to recover it. 
00:27:23.713 [2024-11-20 16:28:54.893173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.713 [2024-11-20 16:28:54.893235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.713 [2024-11-20 16:28:54.893248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.713 [2024-11-20 16:28:54.893254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.713 [2024-11-20 16:28:54.893260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.713 [2024-11-20 16:28:54.893274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.713 qpair failed and we were unable to recover it. 00:27:23.713 [2024-11-20 16:28:54.903128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.713 [2024-11-20 16:28:54.903190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.713 [2024-11-20 16:28:54.903207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.713 [2024-11-20 16:28:54.903214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.713 [2024-11-20 16:28:54.903220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.713 [2024-11-20 16:28:54.903235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.713 qpair failed and we were unable to recover it. 00:27:23.713 [2024-11-20 16:28:54.913204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.713 [2024-11-20 16:28:54.913257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.713 [2024-11-20 16:28:54.913269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.713 [2024-11-20 16:28:54.913276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.713 [2024-11-20 16:28:54.913281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.713 [2024-11-20 16:28:54.913296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.713 qpair failed and we were unable to recover it. 
00:27:23.713 [2024-11-20 16:28:54.923290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.713 [2024-11-20 16:28:54.923343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.713 [2024-11-20 16:28:54.923357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.713 [2024-11-20 16:28:54.923363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.713 [2024-11-20 16:28:54.923372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.713 [2024-11-20 16:28:54.923386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.713 qpair failed and we were unable to recover it. 00:27:23.713 [2024-11-20 16:28:54.933290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.713 [2024-11-20 16:28:54.933346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.713 [2024-11-20 16:28:54.933358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.713 [2024-11-20 16:28:54.933365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.713 [2024-11-20 16:28:54.933370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.713 [2024-11-20 16:28:54.933384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.713 qpair failed and we were unable to recover it. 00:27:23.973 [2024-11-20 16:28:54.943318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.973 [2024-11-20 16:28:54.943377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.973 [2024-11-20 16:28:54.943389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.973 [2024-11-20 16:28:54.943395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.973 [2024-11-20 16:28:54.943401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.973 [2024-11-20 16:28:54.943415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.973 qpair failed and we were unable to recover it. 
00:27:23.973 [2024-11-20 16:28:54.953341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.973 [2024-11-20 16:28:54.953408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.973 [2024-11-20 16:28:54.953421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.973 [2024-11-20 16:28:54.953427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.973 [2024-11-20 16:28:54.953433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.973 [2024-11-20 16:28:54.953448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-11-20 16:28:54.963305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.973 [2024-11-20 16:28:54.963404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.973 [2024-11-20 16:28:54.963417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.973 [2024-11-20 16:28:54.963423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.973 [2024-11-20 16:28:54.963429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.973 [2024-11-20 16:28:54.963443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-11-20 16:28:54.973404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.973 [2024-11-20 16:28:54.973459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.973 [2024-11-20 16:28:54.973471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.973 [2024-11-20 16:28:54.973478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.973 [2024-11-20 16:28:54.973483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.973 [2024-11-20 16:28:54.973498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.973 qpair failed and we were unable to recover it. 
00:27:23.973 [2024-11-20 16:28:54.983440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.973 [2024-11-20 16:28:54.983523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.973 [2024-11-20 16:28:54.983536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:54.983542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:54.983548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:54.983562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-11-20 16:28:54.993468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:54.993524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:54.993536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:54.993542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:54.993548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:54.993562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-11-20 16:28:55.003489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:55.003539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:55.003552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:55.003558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:55.003564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:55.003578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 
00:27:23.974 [2024-11-20 16:28:55.013517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:55.013571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:55.013587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:55.013594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:55.013600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:55.013614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-11-20 16:28:55.023579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:55.023640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:55.023653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:55.023660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:55.023665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:55.023680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-11-20 16:28:55.033567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:55.033620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:55.033633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:55.033639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:55.033645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:55.033659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 
00:27:23.974 [2024-11-20 16:28:55.043586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:55.043638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:55.043650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:55.043657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:55.043663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:55.043677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-11-20 16:28:55.053627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:55.053734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:55.053746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:55.053753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:55.053762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:55.053777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-11-20 16:28:55.063583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:55.063647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:55.063661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:55.063667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:55.063673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:55.063688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 
00:27:23.974 [2024-11-20 16:28:55.073674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:55.073727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:55.073739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:55.073746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:55.073752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:55.073766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-11-20 16:28:55.083693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:55.083794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:55.083807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:55.083813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:55.083819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:55.083833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-11-20 16:28:55.093755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:55.093828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:55.093841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:55.093847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.974 [2024-11-20 16:28:55.093853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.974 [2024-11-20 16:28:55.093867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.974 qpair failed and we were unable to recover it. 
00:27:23.974 [2024-11-20 16:28:55.103816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.974 [2024-11-20 16:28:55.103870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.974 [2024-11-20 16:28:55.103883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.974 [2024-11-20 16:28:55.103889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.975 [2024-11-20 16:28:55.103895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.975 [2024-11-20 16:28:55.103910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-11-20 16:28:55.113786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.975 [2024-11-20 16:28:55.113847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.975 [2024-11-20 16:28:55.113860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.975 [2024-11-20 16:28:55.113867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.975 [2024-11-20 16:28:55.113872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.975 [2024-11-20 16:28:55.113887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-11-20 16:28:55.123862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.975 [2024-11-20 16:28:55.123919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.975 [2024-11-20 16:28:55.123931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.975 [2024-11-20 16:28:55.123938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.975 [2024-11-20 16:28:55.123943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.975 [2024-11-20 16:28:55.123957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.975 qpair failed and we were unable to recover it. 
00:27:23.975 [2024-11-20 16:28:55.133839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.975 [2024-11-20 16:28:55.133898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.975 [2024-11-20 16:28:55.133910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.975 [2024-11-20 16:28:55.133916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.975 [2024-11-20 16:28:55.133922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.975 [2024-11-20 16:28:55.133936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-11-20 16:28:55.143913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.975 [2024-11-20 16:28:55.143980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.975 [2024-11-20 16:28:55.143992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.975 [2024-11-20 16:28:55.143999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.975 [2024-11-20 16:28:55.144004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.975 [2024-11-20 16:28:55.144019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-11-20 16:28:55.153895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.975 [2024-11-20 16:28:55.153948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.975 [2024-11-20 16:28:55.153961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.975 [2024-11-20 16:28:55.153968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.975 [2024-11-20 16:28:55.153974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.975 [2024-11-20 16:28:55.153988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.975 qpair failed and we were unable to recover it. 
00:27:23.975 [2024-11-20 16:28:55.163918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.975 [2024-11-20 16:28:55.163973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.975 [2024-11-20 16:28:55.163985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.975 [2024-11-20 16:28:55.163992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.975 [2024-11-20 16:28:55.163998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.975 [2024-11-20 16:28:55.164012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-11-20 16:28:55.173943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.975 [2024-11-20 16:28:55.173996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.975 [2024-11-20 16:28:55.174009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.975 [2024-11-20 16:28:55.174015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.975 [2024-11-20 16:28:55.174021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.975 [2024-11-20 16:28:55.174035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-11-20 16:28:55.183988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.975 [2024-11-20 16:28:55.184093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.975 [2024-11-20 16:28:55.184107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.975 [2024-11-20 16:28:55.184118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.975 [2024-11-20 16:28:55.184124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.975 [2024-11-20 16:28:55.184139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.975 qpair failed and we were unable to recover it. 
00:27:23.975 [2024-11-20 16:28:55.194014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.975 [2024-11-20 16:28:55.194087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.975 [2024-11-20 16:28:55.194100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.975 [2024-11-20 16:28:55.194106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.975 [2024-11-20 16:28:55.194112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:23.975 [2024-11-20 16:28:55.194126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.975 qpair failed and we were unable to recover it. 00:27:24.235 [2024-11-20 16:28:55.203982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.235 [2024-11-20 16:28:55.204064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.235 [2024-11-20 16:28:55.204077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.235 [2024-11-20 16:28:55.204083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.235 [2024-11-20 16:28:55.204089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.235 [2024-11-20 16:28:55.204103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.235 qpair failed and we were unable to recover it. 00:27:24.235 [2024-11-20 16:28:55.214076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.235 [2024-11-20 16:28:55.214155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.235 [2024-11-20 16:28:55.214168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.235 [2024-11-20 16:28:55.214174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.235 [2024-11-20 16:28:55.214180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.235 [2024-11-20 16:28:55.214194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.235 qpair failed and we were unable to recover it. 
00:27:24.235 [2024-11-20 16:28:55.224032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.235 [2024-11-20 16:28:55.224084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.235 [2024-11-20 16:28:55.224097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.235 [2024-11-20 16:28:55.224103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.235 [2024-11-20 16:28:55.224109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.235 [2024-11-20 16:28:55.224129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.235 qpair failed and we were unable to recover it. 00:27:24.235 [2024-11-20 16:28:55.234142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.235 [2024-11-20 16:28:55.234193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.235 [2024-11-20 16:28:55.234210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.235 [2024-11-20 16:28:55.234216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.235 [2024-11-20 16:28:55.234222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.235 [2024-11-20 16:28:55.234236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.235 qpair failed and we were unable to recover it. 00:27:24.235 [2024-11-20 16:28:55.244154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.235 [2024-11-20 16:28:55.244208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.235 [2024-11-20 16:28:55.244222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.235 [2024-11-20 16:28:55.244228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.235 [2024-11-20 16:28:55.244234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.235 [2024-11-20 16:28:55.244249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.235 qpair failed and we were unable to recover it. 
00:27:24.235 [2024-11-20 16:28:55.254178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.235 [2024-11-20 16:28:55.254258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.235 [2024-11-20 16:28:55.254271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.235 [2024-11-20 16:28:55.254278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.235 [2024-11-20 16:28:55.254283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.235 [2024-11-20 16:28:55.254298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.235 qpair failed and we were unable to recover it. 00:27:24.235 [2024-11-20 16:28:55.264244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.235 [2024-11-20 16:28:55.264346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.235 [2024-11-20 16:28:55.264359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.235 [2024-11-20 16:28:55.264366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.235 [2024-11-20 16:28:55.264372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.235 [2024-11-20 16:28:55.264388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.235 qpair failed and we were unable to recover it. 00:27:24.235 [2024-11-20 16:28:55.274227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.235 [2024-11-20 16:28:55.274281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.235 [2024-11-20 16:28:55.274295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.235 [2024-11-20 16:28:55.274301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.235 [2024-11-20 16:28:55.274307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.235 [2024-11-20 16:28:55.274322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.235 qpair failed and we were unable to recover it. 
00:27:24.235 [2024-11-20 16:28:55.284258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.235 [2024-11-20 16:28:55.284318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.235 [2024-11-20 16:28:55.284331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.235 [2024-11-20 16:28:55.284338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.235 [2024-11-20 16:28:55.284343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.235 [2024-11-20 16:28:55.284358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.235 qpair failed and we were unable to recover it. 00:27:24.235 [2024-11-20 16:28:55.294319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.235 [2024-11-20 16:28:55.294388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.235 [2024-11-20 16:28:55.294402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.235 [2024-11-20 16:28:55.294408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.235 [2024-11-20 16:28:55.294414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.235 [2024-11-20 16:28:55.294429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.235 qpair failed and we were unable to recover it. 00:27:24.235 [2024-11-20 16:28:55.304345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.235 [2024-11-20 16:28:55.304402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.235 [2024-11-20 16:28:55.304415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.235 [2024-11-20 16:28:55.304421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.235 [2024-11-20 16:28:55.304427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.235 [2024-11-20 16:28:55.304442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.235 qpair failed and we were unable to recover it. 
00:27:24.236 [2024-11-20 16:28:55.314369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.314435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.314448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.314458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.314463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.314478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 00:27:24.236 [2024-11-20 16:28:55.324376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.324430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.324443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.324449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.324455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.324469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 00:27:24.236 [2024-11-20 16:28:55.334421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.334476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.334489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.334496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.334501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.334516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 
00:27:24.236 [2024-11-20 16:28:55.344445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.344514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.344527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.344533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.344539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.344553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 00:27:24.236 [2024-11-20 16:28:55.354464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.354514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.354526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.354532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.354539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.354557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 00:27:24.236 [2024-11-20 16:28:55.364504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.364557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.364570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.364576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.364582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.364597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 
00:27:24.236 [2024-11-20 16:28:55.374460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.374546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.374559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.374565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.374571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.374585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 00:27:24.236 [2024-11-20 16:28:55.384597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.384698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.384710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.384716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.384722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.384737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 00:27:24.236 [2024-11-20 16:28:55.394582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.394630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.394642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.394649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.394655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.394669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 
00:27:24.236 [2024-11-20 16:28:55.404612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.404666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.404678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.404685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.404690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.404705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 00:27:24.236 [2024-11-20 16:28:55.414660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.414718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.414731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.414737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.414742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.414757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 00:27:24.236 [2024-11-20 16:28:55.424702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.424757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.424769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.424776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.424781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.424796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 
00:27:24.236 [2024-11-20 16:28:55.434634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.236 [2024-11-20 16:28:55.434686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.236 [2024-11-20 16:28:55.434698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.236 [2024-11-20 16:28:55.434704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.236 [2024-11-20 16:28:55.434710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.236 [2024-11-20 16:28:55.434725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.236 qpair failed and we were unable to recover it. 00:27:24.237 [2024-11-20 16:28:55.444726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.237 [2024-11-20 16:28:55.444777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.237 [2024-11-20 16:28:55.444793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.237 [2024-11-20 16:28:55.444800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.237 [2024-11-20 16:28:55.444805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.237 [2024-11-20 16:28:55.444820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.237 qpair failed and we were unable to recover it. 00:27:24.237 [2024-11-20 16:28:55.454767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.237 [2024-11-20 16:28:55.454820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.237 [2024-11-20 16:28:55.454833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.237 [2024-11-20 16:28:55.454839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.237 [2024-11-20 16:28:55.454845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.237 [2024-11-20 16:28:55.454860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.237 qpair failed and we were unable to recover it. 
00:27:24.497 [2024-11-20 16:28:55.464800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.497 [2024-11-20 16:28:55.464897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.497 [2024-11-20 16:28:55.464909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.497 [2024-11-20 16:28:55.464916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.497 [2024-11-20 16:28:55.464921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.497 [2024-11-20 16:28:55.464936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.497 qpair failed and we were unable to recover it. 00:27:24.497 [2024-11-20 16:28:55.474847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.497 [2024-11-20 16:28:55.474904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.497 [2024-11-20 16:28:55.474918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.497 [2024-11-20 16:28:55.474924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.497 [2024-11-20 16:28:55.474930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.497 [2024-11-20 16:28:55.474944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.497 qpair failed and we were unable to recover it. 00:27:24.497 [2024-11-20 16:28:55.484836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.497 [2024-11-20 16:28:55.484915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.497 [2024-11-20 16:28:55.484928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.497 [2024-11-20 16:28:55.484935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.497 [2024-11-20 16:28:55.484944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.497 [2024-11-20 16:28:55.484959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.497 qpair failed and we were unable to recover it. 
00:27:24.497 [2024-11-20 16:28:55.494840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.497 [2024-11-20 16:28:55.494912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.497 [2024-11-20 16:28:55.494925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.497 [2024-11-20 16:28:55.494932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.497 [2024-11-20 16:28:55.494937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.497 [2024-11-20 16:28:55.494952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.497 qpair failed and we were unable to recover it. 00:27:24.497 [2024-11-20 16:28:55.504960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.497 [2024-11-20 16:28:55.505017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.497 [2024-11-20 16:28:55.505030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.497 [2024-11-20 16:28:55.505036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.497 [2024-11-20 16:28:55.505042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.497 [2024-11-20 16:28:55.505057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.497 qpair failed and we were unable to recover it. 00:27:24.497 [2024-11-20 16:28:55.514958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.497 [2024-11-20 16:28:55.515013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.497 [2024-11-20 16:28:55.515025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.497 [2024-11-20 16:28:55.515031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.497 [2024-11-20 16:28:55.515037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.497 [2024-11-20 16:28:55.515052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.497 qpair failed and we were unable to recover it. 
00:27:24.497 [2024-11-20 16:28:55.524991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.497 [2024-11-20 16:28:55.525042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.525055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.525061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.525067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.525082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 00:27:24.498 [2024-11-20 16:28:55.535007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.535065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.535078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.535084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.535090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.535105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 00:27:24.498 [2024-11-20 16:28:55.545071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.545147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.545160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.545166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.545172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.545188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 
00:27:24.498 [2024-11-20 16:28:55.555068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.555135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.555147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.555153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.555159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.555174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 00:27:24.498 [2024-11-20 16:28:55.565066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.565115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.565127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.565134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.565139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.565154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 00:27:24.498 [2024-11-20 16:28:55.575104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.575165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.575182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.575188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.575194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.575212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 
00:27:24.498 [2024-11-20 16:28:55.585087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.585139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.585152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.585158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.585164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.585178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 00:27:24.498 [2024-11-20 16:28:55.595154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.595213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.595226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.595233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.595239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.595253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 00:27:24.498 [2024-11-20 16:28:55.605179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.605237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.605249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.605256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.605262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.605277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 
00:27:24.498 [2024-11-20 16:28:55.615247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.615301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.615314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.615320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.615329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.615344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 00:27:24.498 [2024-11-20 16:28:55.625250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.625306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.625319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.625326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.625332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.625347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 00:27:24.498 [2024-11-20 16:28:55.635263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.635358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.635370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.635376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.635382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.635398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 
00:27:24.498 [2024-11-20 16:28:55.645377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.498 [2024-11-20 16:28:55.645457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.498 [2024-11-20 16:28:55.645470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.498 [2024-11-20 16:28:55.645476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.498 [2024-11-20 16:28:55.645482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.498 [2024-11-20 16:28:55.645497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.498 qpair failed and we were unable to recover it. 00:27:24.498 [2024-11-20 16:28:55.655359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.499 [2024-11-20 16:28:55.655416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.499 [2024-11-20 16:28:55.655429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.499 [2024-11-20 16:28:55.655436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.499 [2024-11-20 16:28:55.655442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.499 [2024-11-20 16:28:55.655456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.499 qpair failed and we were unable to recover it. 00:27:24.499 [2024-11-20 16:28:55.665425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.499 [2024-11-20 16:28:55.665483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.499 [2024-11-20 16:28:55.665496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.499 [2024-11-20 16:28:55.665502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.499 [2024-11-20 16:28:55.665508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.499 [2024-11-20 16:28:55.665523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.499 qpair failed and we were unable to recover it. 
00:27:24.499 [2024-11-20 16:28:55.675400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.499 [2024-11-20 16:28:55.675453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.499 [2024-11-20 16:28:55.675465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.499 [2024-11-20 16:28:55.675472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.499 [2024-11-20 16:28:55.675477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.499 [2024-11-20 16:28:55.675491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.499 qpair failed and we were unable to recover it. 00:27:24.499 [2024-11-20 16:28:55.685418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.499 [2024-11-20 16:28:55.685475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.499 [2024-11-20 16:28:55.685488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.499 [2024-11-20 16:28:55.685495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.499 [2024-11-20 16:28:55.685501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.499 [2024-11-20 16:28:55.685516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.499 qpair failed and we were unable to recover it. 00:27:24.499 [2024-11-20 16:28:55.695492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.499 [2024-11-20 16:28:55.695547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.499 [2024-11-20 16:28:55.695560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.499 [2024-11-20 16:28:55.695567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.499 [2024-11-20 16:28:55.695572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.499 [2024-11-20 16:28:55.695587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.499 qpair failed and we were unable to recover it. 
00:27:24.499 [2024-11-20 16:28:55.705496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.499 [2024-11-20 16:28:55.705554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.499 [2024-11-20 16:28:55.705567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.499 [2024-11-20 16:28:55.705573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.499 [2024-11-20 16:28:55.705579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.499 [2024-11-20 16:28:55.705593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.499 qpair failed and we were unable to recover it. 00:27:24.499 [2024-11-20 16:28:55.715456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.499 [2024-11-20 16:28:55.715508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.499 [2024-11-20 16:28:55.715520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.499 [2024-11-20 16:28:55.715526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.499 [2024-11-20 16:28:55.715532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.499 [2024-11-20 16:28:55.715546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.499 qpair failed and we were unable to recover it. 00:27:24.499 [2024-11-20 16:28:55.725552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.499 [2024-11-20 16:28:55.725609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.499 [2024-11-20 16:28:55.725627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.499 [2024-11-20 16:28:55.725634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.499 [2024-11-20 16:28:55.725640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.499 [2024-11-20 16:28:55.725658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.499 qpair failed and we were unable to recover it. 
00:27:24.759 [2024-11-20 16:28:55.735560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.759 [2024-11-20 16:28:55.735616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.759 [2024-11-20 16:28:55.735633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.759 [2024-11-20 16:28:55.735641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.759 [2024-11-20 16:28:55.735647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.759 [2024-11-20 16:28:55.735664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.759 qpair failed and we were unable to recover it. 00:27:24.759 [2024-11-20 16:28:55.745526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.759 [2024-11-20 16:28:55.745583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.759 [2024-11-20 16:28:55.745596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.759 [2024-11-20 16:28:55.745607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.759 [2024-11-20 16:28:55.745612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.759 [2024-11-20 16:28:55.745628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.759 qpair failed and we were unable to recover it. 00:27:24.759 [2024-11-20 16:28:55.755637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.759 [2024-11-20 16:28:55.755689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.759 [2024-11-20 16:28:55.755703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.759 [2024-11-20 16:28:55.755709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.759 [2024-11-20 16:28:55.755715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.759 [2024-11-20 16:28:55.755730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.759 qpair failed and we were unable to recover it. 
00:27:24.759 [2024-11-20 16:28:55.765667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.759 [2024-11-20 16:28:55.765718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.759 [2024-11-20 16:28:55.765731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.759 [2024-11-20 16:28:55.765737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.759 [2024-11-20 16:28:55.765743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.759 [2024-11-20 16:28:55.765758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.759 qpair failed and we were unable to recover it. 00:27:24.759 [2024-11-20 16:28:55.775632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.759 [2024-11-20 16:28:55.775690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.759 [2024-11-20 16:28:55.775704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.759 [2024-11-20 16:28:55.775710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.759 [2024-11-20 16:28:55.775716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.759 [2024-11-20 16:28:55.775731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.759 qpair failed and we were unable to recover it. 00:27:24.759 [2024-11-20 16:28:55.785733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.759 [2024-11-20 16:28:55.785801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.759 [2024-11-20 16:28:55.785814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.759 [2024-11-20 16:28:55.785820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.759 [2024-11-20 16:28:55.785826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.759 [2024-11-20 16:28:55.785845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.759 qpair failed and we were unable to recover it. 
00:27:24.759 [2024-11-20 16:28:55.795709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.759 [2024-11-20 16:28:55.795771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.759 [2024-11-20 16:28:55.795785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.759 [2024-11-20 16:28:55.795791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.759 [2024-11-20 16:28:55.795796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.759 [2024-11-20 16:28:55.795812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.759 qpair failed and we were unable to recover it. 00:27:24.759 [2024-11-20 16:28:55.805735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.759 [2024-11-20 16:28:55.805790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.759 [2024-11-20 16:28:55.805802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.759 [2024-11-20 16:28:55.805809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.759 [2024-11-20 16:28:55.805814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.759 [2024-11-20 16:28:55.805829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.759 qpair failed and we were unable to recover it. 00:27:24.759 [2024-11-20 16:28:55.815786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.759 [2024-11-20 16:28:55.815842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.759 [2024-11-20 16:28:55.815854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.759 [2024-11-20 16:28:55.815861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.759 [2024-11-20 16:28:55.815866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.759 [2024-11-20 16:28:55.815880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.759 qpair failed and we were unable to recover it. 
00:27:24.759 [2024-11-20 16:28:55.825811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.759 [2024-11-20 16:28:55.825863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.825876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.825882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.825888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.825903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 00:27:24.760 [2024-11-20 16:28:55.835887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.835942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.835955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.835961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.835967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.835982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 00:27:24.760 [2024-11-20 16:28:55.845851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.845905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.845918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.845925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.845931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.845945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 
00:27:24.760 [2024-11-20 16:28:55.855933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.855988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.856001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.856008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.856014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.856028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 00:27:24.760 [2024-11-20 16:28:55.865850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.865905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.865917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.865924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.865929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.865944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 00:27:24.760 [2024-11-20 16:28:55.875954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.876048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.876064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.876070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.876076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.876091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 
00:27:24.760 [2024-11-20 16:28:55.885952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.886006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.886019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.886025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.886031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.886045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 00:27:24.760 [2024-11-20 16:28:55.896013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.896071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.896084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.896091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.896097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.896111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 00:27:24.760 [2024-11-20 16:28:55.906023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.906097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.906109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.906116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.906121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.906136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 
00:27:24.760 [2024-11-20 16:28:55.916055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.916109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.916122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.916129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.916135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.916152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 00:27:24.760 [2024-11-20 16:28:55.926148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.926209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.926222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.926229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.926235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.926249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 00:27:24.760 [2024-11-20 16:28:55.936124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.936232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.936245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.936252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.936258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec9c000b90 00:27:24.760 [2024-11-20 16:28:55.936273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.760 qpair failed and we were unable to recover it. 
00:27:24.760 [2024-11-20 16:28:55.946154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.946255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.760 [2024-11-20 16:28:55.946315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.760 [2024-11-20 16:28:55.946342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.760 [2024-11-20 16:28:55.946364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7feca4000b90 00:27:24.760 [2024-11-20 16:28:55.946416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.760 qpair failed and we were unable to recover it. 00:27:24.760 [2024-11-20 16:28:55.956176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.760 [2024-11-20 16:28:55.956261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.761 [2024-11-20 16:28:55.956291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.761 [2024-11-20 16:28:55.956306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.761 [2024-11-20 16:28:55.956320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7feca4000b90 00:27:24.761 [2024-11-20 16:28:55.956351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.761 qpair failed and we were unable to recover it. 00:27:24.761 [2024-11-20 16:28:55.966243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.761 [2024-11-20 16:28:55.966346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.761 [2024-11-20 16:28:55.966403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.761 [2024-11-20 16:28:55.966429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.761 [2024-11-20 16:28:55.966452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec98000b90 00:27:24.761 [2024-11-20 16:28:55.966503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:24.761 qpair failed and we were unable to recover it. 
00:27:24.761 [2024-11-20 16:28:55.976248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.761 [2024-11-20 16:28:55.976319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.761 [2024-11-20 16:28:55.976347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.761 [2024-11-20 16:28:55.976361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.761 [2024-11-20 16:28:55.976375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fec98000b90 00:27:24.761 [2024-11-20 16:28:55.976406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:24.761 qpair failed and we were unable to recover it. 00:27:24.761 [2024-11-20 16:28:55.986365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.761 [2024-11-20 16:28:55.986486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.761 [2024-11-20 16:28:55.986544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.761 [2024-11-20 16:28:55.986570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.761 [2024-11-20 16:28:55.986592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1481ba0 00:27:24.761 [2024-11-20 16:28:55.986644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.761 qpair failed and we were unable to recover it. 00:27:25.020 [2024-11-20 16:28:55.996316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.020 [2024-11-20 16:28:55.996401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.020 [2024-11-20 16:28:55.996435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.020 [2024-11-20 16:28:55.996452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.020 [2024-11-20 16:28:55.996465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1481ba0 00:27:25.020 [2024-11-20 16:28:55.996499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.020 qpair failed and we were unable to recover it. 00:27:25.020 [2024-11-20 16:28:55.996609] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:25.020 A controller has encountered a failure and is being reset. 00:27:25.020 Controller properly reset. 
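The pattern above is the host retrying its I/O-queue CONNECTs against a controller the target no longer tracks: each CONNECT names controller ID 0x1, the target answers "Unknown controller ID", the command completes with sct 1 / sc 130 (0x82, the Fabrics "Connect Invalid Parameters" status), and the host surfaces this as CQ transport error -6 until the keep-alive finally fails and the controller is reset. When the same symptom appears outside this test, the target's view can be inspected over JSON-RPC. The commands below are a hypothetical diagnostic sketch, not part of the test itself (rpc.py path and NQN are copied from this run; the target's default RPC socket is assumed):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Controllers the target currently knows for the subsystem; if cntlid 0x1 is
  # absent here, I/O-queue CONNECTs that still reference it are rejected as above.
  $rpc nvmf_subsystem_get_controllers $nqn

  # Queue pairs and listeners currently backing the subsystem on 10.0.0.2:4420.
  $rpc nvmf_subsystem_get_qpairs $nqn
  $rpc nvmf_subsystem_get_listeners $nqn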
00:27:25.020 Initializing NVMe Controllers 00:27:25.020 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:25.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:25.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:25.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:25.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:25.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:25.020 Initialization complete. Launching workers. 00:27:25.020 Starting thread on core 1 00:27:25.020 Starting thread on core 2 00:27:25.020 Starting thread on core 3 00:27:25.020 Starting thread on core 0 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:25.020 00:27:25.020 real 0m10.741s 00:27:25.020 user 0m19.259s 00:27:25.020 sys 0m4.745s 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.020 ************************************ 00:27:25.020 END TEST nvmf_target_disconnect_tc2 00:27:25.020 ************************************ 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:25.020 rmmod nvme_tcp 00:27:25.020 rmmod nvme_fabrics 00:27:25.020 rmmod nvme_keyring 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2081052 ']' 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2081052 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2081052 ']' 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2081052 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2081052 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2081052' 00:27:25.020 killing process with pid 2081052 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2081052 00:27:25.020 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2081052 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.279 16:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.814 16:28:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:27.814 00:27:27.814 real 0m19.527s 00:27:27.814 user 0m46.779s 00:27:27.814 sys 0m9.718s 00:27:27.814 16:28:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.814 16:28:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:27.814 ************************************ 00:27:27.814 END TEST nvmf_target_disconnect 00:27:27.814 ************************************ 00:27:27.814 16:28:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:27.814 00:27:27.814 real 5m54.287s 00:27:27.815 user 10m36.514s 00:27:27.815 sys 1m58.542s 00:27:27.815 16:28:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.815 16:28:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.815 ************************************ 00:27:27.815 END TEST nvmf_host 00:27:27.815 ************************************ 00:27:27.815 16:28:58 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:27.815 16:28:58 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:27.815 16:28:58 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:27.815 16:28:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:27.815 16:28:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.815 16:28:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:27.815 ************************************ 00:27:27.815 START TEST nvmf_target_core_interrupt_mode 00:27:27.815 ************************************ 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:27.815 * Looking for test storage... 00:27:27.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:27.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.815 --rc genhtml_branch_coverage=1 00:27:27.815 --rc genhtml_function_coverage=1 00:27:27.815 --rc genhtml_legend=1 00:27:27.815 --rc geninfo_all_blocks=1 00:27:27.815 --rc geninfo_unexecuted_blocks=1 00:27:27.815 00:27:27.815 ' 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:27.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.815 --rc genhtml_branch_coverage=1 00:27:27.815 --rc genhtml_function_coverage=1 00:27:27.815 --rc genhtml_legend=1 00:27:27.815 --rc geninfo_all_blocks=1 00:27:27.815 --rc geninfo_unexecuted_blocks=1 00:27:27.815 00:27:27.815 ' 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:27.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.815 --rc genhtml_branch_coverage=1 00:27:27.815 --rc genhtml_function_coverage=1 00:27:27.815 --rc genhtml_legend=1 00:27:27.815 --rc geninfo_all_blocks=1 00:27:27.815 --rc geninfo_unexecuted_blocks=1 00:27:27.815 00:27:27.815 ' 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:27.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.815 --rc genhtml_branch_coverage=1 00:27:27.815 --rc genhtml_function_coverage=1 00:27:27.815 --rc genhtml_legend=1 00:27:27.815 --rc geninfo_all_blocks=1 00:27:27.815 --rc geninfo_unexecuted_blocks=1 00:27:27.815 00:27:27.815 ' 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.815 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:27.816 ************************************ 00:27:27.816 START TEST nvmf_abort 00:27:27.816 ************************************ 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:27.816 * Looking for test storage... 00:27:27.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:27.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.816 --rc genhtml_branch_coverage=1 00:27:27.816 --rc genhtml_function_coverage=1 00:27:27.816 --rc genhtml_legend=1 00:27:27.816 --rc geninfo_all_blocks=1 00:27:27.816 --rc geninfo_unexecuted_blocks=1 00:27:27.816 00:27:27.816 ' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:27.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.816 --rc genhtml_branch_coverage=1 00:27:27.816 --rc genhtml_function_coverage=1 00:27:27.816 --rc genhtml_legend=1 00:27:27.816 --rc geninfo_all_blocks=1 00:27:27.816 --rc geninfo_unexecuted_blocks=1 00:27:27.816 00:27:27.816 ' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:27.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.816 --rc genhtml_branch_coverage=1 00:27:27.816 --rc genhtml_function_coverage=1 00:27:27.816 --rc genhtml_legend=1 00:27:27.816 --rc geninfo_all_blocks=1 00:27:27.816 --rc geninfo_unexecuted_blocks=1 00:27:27.816 00:27:27.816 ' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:27.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.816 --rc genhtml_branch_coverage=1 00:27:27.816 --rc genhtml_function_coverage=1 00:27:27.816 --rc genhtml_legend=1 00:27:27.816 --rc geninfo_all_blocks=1 00:27:27.816 --rc geninfo_unexecuted_blocks=1 00:27:27.816 00:27:27.816 ' 00:27:27.816 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.817 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:27.817 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.817 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.817 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.817 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.817 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.817 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.817 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.817 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.817 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.817 16:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.817 16:28:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:27.817 16:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:34.387 16:29:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.387 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:34.388 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
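For reference, the device probing traced here can be repeated by hand: common.sh matches Intel E810 functions by PCI ID 0x8086:0x159b and then reads each function's net/ directory in sysfs to find the kernel interface name. A hypothetical manual check on the same host (bus addresses taken from this run) would be:

  lspci -d 8086:159b                          # the two E810 functions, 0000:86:00.0 and 0000:86:00.1
  ls /sys/bus/pci/devices/0000:86:00.0/net/   # expected to show cvl_0_0 on this host
  ls /sys/bus/pci/devices/0000:86:00.1/net/   # expected to show cvl_0_1 on this host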
00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:34.388 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:34.388 Found net devices under 0000:86:00.0: cvl_0_0 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:34.388 Found net devices under 0000:86:00.1: cvl_0_1 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:34.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:27:34.388 00:27:34.388 --- 10.0.0.2 ping statistics --- 00:27:34.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.388 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:34.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:27:34.388 00:27:34.388 --- 10.0.0.1 ping statistics --- 00:27:34.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.388 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:34.388 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:34.389 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.389 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:34.389 16:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2085803 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2085803 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2085803 ']' 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.389 [2024-11-20 16:29:05.063465] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:34.389 [2024-11-20 16:29:05.064354] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:27:34.389 [2024-11-20 16:29:05.064389] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.389 [2024-11-20 16:29:05.139631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.389 [2024-11-20 16:29:05.180378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.389 [2024-11-20 16:29:05.180414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.389 [2024-11-20 16:29:05.180421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.389 [2024-11-20 16:29:05.180426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.389 [2024-11-20 16:29:05.180431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.389 [2024-11-20 16:29:05.181838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.389 [2024-11-20 16:29:05.181943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.389 [2024-11-20 16:29:05.181944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.389 [2024-11-20 16:29:05.248722] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:34.389 [2024-11-20 16:29:05.249506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:34.389 [2024-11-20 16:29:05.249785] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:34.389 [2024-11-20 16:29:05.249938] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.389 [2024-11-20 16:29:05.314724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.389 Malloc0 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.389 Delay0 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.389 [2024-11-20 16:29:05.402623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.389 16:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:34.389 [2024-11-20 16:29:05.491043] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:36.915 Initializing NVMe Controllers 00:27:36.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:36.915 controller IO queue size 128 less than required 00:27:36.915 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:36.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:36.915 Initialization complete. Launching workers. 
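With the target up, target/abort.sh drives everything else over JSON-RPC. The same sequence issued by hand with rpc.py (which talks to /var/tmp/spdk.sock by default) would look roughly like the sketch below; every parameter is taken from the rpc_cmd lines in the trace.

#!/usr/bin/env bash
# RPC sequence behind target/abort.sh: a delay bdev exported over NVMe/TCP,
# then the abort example aborting queued reads against it.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode0

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0
# Large artificial latencies keep I/O queued long enough for aborts to land.
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK0
$RPC nvmf_subsystem_add_ns "$NQN" Delay0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Queue depth 128 for a 1 s run; the run above reports ~37.8k aborts submitted,
# 57 of them unsuccessful and 66 that could not be submitted at all.
"$SPDK/build/examples/abort" \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -c 0x1 -t 1 -l warning -q 128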
00:27:36.915 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37752 00:27:36.915 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37809, failed to submit 66 00:27:36.915 success 37752, unsuccessful 57, failed 0 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:36.915 rmmod nvme_tcp 00:27:36.915 rmmod nvme_fabrics 00:27:36.915 rmmod nvme_keyring 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2085803 ']' 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2085803 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2085803 ']' 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2085803 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2085803 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2085803' 00:27:36.915 killing process with pid 2085803 
00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2085803 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2085803 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.915 16:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.821 16:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.821 00:27:38.821 real 0m11.114s 00:27:38.821 user 0m10.107s 00:27:38.821 sys 0m5.763s 00:27:38.821 16:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.821 16:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.821 ************************************ 00:27:38.821 END TEST nvmf_abort 00:27:38.821 ************************************ 00:27:38.821 16:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:38.821 16:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:38.821 16:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.821 16:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:38.821 ************************************ 00:27:38.821 START TEST nvmf_ns_hotplug_stress 00:27:38.821 ************************************ 00:27:38.821 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:39.080 * Looking for test storage... 
00:27:39.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:39.080 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:39.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.081 --rc genhtml_branch_coverage=1 00:27:39.081 --rc genhtml_function_coverage=1 00:27:39.081 --rc genhtml_legend=1 00:27:39.081 --rc geninfo_all_blocks=1 00:27:39.081 --rc geninfo_unexecuted_blocks=1 00:27:39.081 00:27:39.081 ' 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:39.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.081 --rc genhtml_branch_coverage=1 00:27:39.081 --rc genhtml_function_coverage=1 00:27:39.081 --rc genhtml_legend=1 00:27:39.081 --rc geninfo_all_blocks=1 00:27:39.081 --rc geninfo_unexecuted_blocks=1 00:27:39.081 00:27:39.081 ' 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:39.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.081 --rc genhtml_branch_coverage=1 00:27:39.081 --rc genhtml_function_coverage=1 00:27:39.081 --rc genhtml_legend=1 00:27:39.081 --rc geninfo_all_blocks=1 00:27:39.081 --rc geninfo_unexecuted_blocks=1 00:27:39.081 00:27:39.081 ' 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:39.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.081 --rc genhtml_branch_coverage=1 00:27:39.081 --rc genhtml_function_coverage=1 
00:27:39.081 --rc genhtml_legend=1 00:27:39.081 --rc geninfo_all_blocks=1 00:27:39.081 --rc geninfo_unexecuted_blocks=1 00:27:39.081 00:27:39.081 ' 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
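The lt 1.15 2 / cmp_versions trace a few lines above is scripts/common.sh comparing the installed lcov version against 2.x component by component; because 1.15 < 2, the branch/function-coverage flags are kept. A self-contained sketch of that comparison follows (the function name version_lt is illustrative; the in-tree helper is cmp_versions and also handles '-' and ':' separators).

#!/usr/bin/env bash
# Component-wise version compare in the spirit of scripts/common.sh cmp_versions.
version_lt() {                        # version_lt 1.15 2 -> exit 0 (true)
  local -a a b
  IFS='.-' read -ra a <<< "$1"
  IFS='.-' read -ra b <<< "$2"
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0} y=${b[i]:-0}         # missing components count as 0
    ((x > y)) && return 1
    ((x < y)) && return 0
  done
  return 1                            # versions are equal, so not "less than"
}

if version_lt 1.15 2; then
  echo "lcov < 2: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
fi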
00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.081 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.082 16:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:45.652 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.652 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.653 16:29:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.653 16:29:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:45.653 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:45.653 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.653 
16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:45.653 Found net devices under 0000:86:00.0: cvl_0_0 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:45.653 Found net devices under 0000:86:00.1: cvl_0_1 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.653 16:29:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.653 16:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.653 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.653 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:27:45.654 00:27:45.654 --- 10.0.0.2 ping statistics --- 00:27:45.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.654 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:45.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:27:45.654 00:27:45.654 --- 10.0.0.1 ping statistics --- 00:27:45.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.654 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2089767 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2089767 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2089767 ']' 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
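For the ns_hotplug_stress run the harness re-discovers the NICs first: the "Found 0000:86:00.x" and "Found net devices under ..." lines above come from matching PCI vendor/device IDs (0x8086/0x159b is the E810 pair matched here) and then reading each device's net/ directory in sysfs. A rough stand-alone equivalent, without the helper's pci_bus_cache and checking only the two E810 IDs it used:

#!/usr/bin/env bash
# Rough equivalent of gather_supported_nvmf_pci_devs for the E810 case traced above.
# The real helper also knows x722 and Mellanox IDs; this only checks 0x1592/0x159b.
shopt -s nullglob

intel=0x8086
e810=(0x1592 0x159b)

for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor")
  device=$(<"$dev/device")
  [[ $vendor == "$intel" ]] || continue
  for id in "${e810[@]}"; do
    [[ $device == "$id" ]] || continue
    echo "Found ${dev##*/} ($vendor - $device)"
    for net in "$dev"/net/*; do
      echo "  net device: ${net##*/}"   # e.g. cvl_0_0, cvl_0_1 in the trace
    done
  done
done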
00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:45.654 [2024-11-20 16:29:16.186678] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:45.654 [2024-11-20 16:29:16.187562] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:27:45.654 [2024-11-20 16:29:16.187595] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.654 [2024-11-20 16:29:16.265464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:45.654 [2024-11-20 16:29:16.306379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.654 [2024-11-20 16:29:16.306414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.654 [2024-11-20 16:29:16.306421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.654 [2024-11-20 16:29:16.306427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.654 [2024-11-20 16:29:16.306432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.654 [2024-11-20 16:29:16.307803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.654 [2024-11-20 16:29:16.307909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.654 [2024-11-20 16:29:16.307910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.654 [2024-11-20 16:29:16.375003] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:45.654 [2024-11-20 16:29:16.375788] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:45.654 [2024-11-20 16:29:16.376095] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:45.654 [2024-11-20 16:29:16.376238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:45.654 [2024-11-20 16:29:16.608674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:45.654 16:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:45.921 [2024-11-20 16:29:16.997010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.921 16:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:46.181 16:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:46.181 Malloc0 00:27:46.439 16:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:46.439 Delay0 00:27:46.439 16:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.697 16:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:46.954 NULL1 00:27:46.954 16:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
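Setup for the hotplug test mirrors the abort test, except that cnode1 carries both Delay0 and a resizable null bdev NULL1. The iterations that fill the rest of the trace are the stress loop from target/ns_hotplug_stress.sh: spdk_nvme_perf reads from the host-side port while the loop keeps removing namespace 1, re-adding Delay0 and growing NULL1. A condensed sketch of that loop is below (variable names follow the script; treat it as a reading aid, not the script itself).

#!/usr/bin/env bash
# Condensed view of the ns_hotplug_stress loop whose iterations follow in the trace.
set -u

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000

# 30 s of random 512 B reads at queue depth 128. -Q 1000 keeps the run going through
# I/O errors and suppresses all but every 1000th error message -- hence the
# "Message suppressed 999 times" lines in the trace.
"$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

while kill -0 "$PERF_PID" 2>/dev/null; do
  $RPC nvmf_subsystem_remove_ns "$NQN" 1          # hot-remove namespace 1 under live I/O
  $RPC nvmf_subsystem_add_ns "$NQN" Delay0        # hot-add the delay bdev back
  $RPC bdev_null_resize NULL1 $((++null_size))    # grow NULL1 each pass: 1001, 1002, ...
done
wait "$PERF_PID"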
00:27:46.954 16:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2090057 00:27:46.954 16:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:46.954 16:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:46.954 16:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.324 Read completed with error (sct=0, sc=11) 00:27:48.324 16:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.324 16:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:48.324 16:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:48.581 true 00:27:48.581 16:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:48.581 16:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.512 16:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.769 16:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:49.769 16:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:49.769 true 00:27:49.769 16:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:49.769 16:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.025 16:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.282 16:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:50.282 16:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:50.282 true 00:27:50.539 16:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:50.539 16:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:51.469 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:51.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:51.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:51.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:51.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:51.726 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:51.726 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:51.983 true 00:27:51.983 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:51.983 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.803 16:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.803 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.803 16:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:52.803 16:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:53.060 true 00:27:53.060 16:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:53.060 16:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:53.317 16:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.574 16:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:53.574 16:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:53.574 true 00:27:53.574 16:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:53.574 16:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.831 16:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.088 16:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:54.088 16:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:54.345 true 00:27:54.345 16:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:54.345 16:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.274 16:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.274 16:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:55.275 16:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:55.531 true 00:27:55.531 16:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:55.531 16:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.788 16:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.788 16:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:55.788 16:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:56.045 true 00:27:56.045 16:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:56.045 16:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.416 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.416 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:57.416 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:57.673 true 00:27:57.673 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:57.673 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.603 16:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.603 16:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:58.603 16:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:58.859 true 00:27:58.860 16:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2090057 00:27:58.860 16:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.116 16:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.372 16:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:59.372 16:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:59.372 true 00:27:59.372 16:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:27:59.372 16:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.739 16:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.996 16:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:00.996 16:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:00.996 true 00:28:00.996 16:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:00.996 16:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.252 16:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.508 16:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:01.508 16:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:01.508 true 00:28:01.765 16:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:01.765 16:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.694 16:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.951 16:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:02.951 16:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:03.208 true 00:28:03.208 16:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:03.208 16:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.138 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.138 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:04.138 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:04.395 true 00:28:04.395 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:04.395 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.652 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.652 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:04.652 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:04.909 true 00:28:04.909 16:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:04.909 16:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.278 16:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.278 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:06.278 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:06.535 true 00:28:06.535 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:06.535 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.467 16:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.467 16:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:07.467 16:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:07.724 true 00:28:07.724 16:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:07.724 16:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.981 16:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.238 16:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:08.238 16:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:08.238 true 00:28:08.238 16:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:08.238 16:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.612 16:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.612 16:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:09.612 16:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:09.870 true 00:28:09.870 16:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:09.870 16:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.884 16:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.884 16:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:10.884 16:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:11.176 true 00:28:11.176 16:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:11.176 16:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.481 16:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.482 16:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:11.482 16:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:11.762 true 00:28:11.762 16:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:11.762 16:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.698 16:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.956 16:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:12.956 16:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:13.215 true 00:28:13.215 16:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:13.215 16:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.149 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.149 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:14.149 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:14.407 true 00:28:14.407 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:14.407 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.665 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.923 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:14.923 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:14.923 true 00:28:15.181 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:15.181 16:29:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.114 16:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.373 16:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:16.373 16:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:16.631 true 00:28:16.631 16:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057 00:28:16.631 16:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.565 16:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.565 Initializing NVMe Controllers 00:28:17.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:17.565 Controller IO queue size 128, less than required. 00:28:17.565 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:17.565 Controller IO queue size 128, less than required. 00:28:17.565 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:17.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:17.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:17.565 Initialization complete. Launching workers. 
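Note: the sh@40-@50 xtrace entries above are the main stress loop of ns_hotplug_stress.sh: spdk_nvme_perf runs against the target in the background for 30 seconds while namespace 1 is repeatedly hot-removed, re-added, and its backing null bdev grown by one block per pass (null_size 1001 ... 1028). A minimal bash sketch of that loop, reconstructed from the trace rather than copied from the script ($rpc_py and $SPDK_BIN_DIR stand in for the full paths printed in the log, and the null_size bookkeeping is an assumption):

    # Reconstructed sketch, not the verbatim test script.
    "$SPDK_BIN_DIR/spdk_nvme_perf" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                                                          # sh@42
    null_size=1000
    while kill -0 "$PERF_PID"; do                                        # sh@44: loop until perf exits
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46
        $rpc_py bdev_null_resize NULL1 $((++null_size))                  # sh@49-@50
    done
    wait "$PERF_PID"                                                     # sh@53

The Latency(us) table just below is perf's final summary; the 'kill: (2090057) - No such process' and 'wait 2090057' entries that follow it mark the point where the loop notices the perf process has exited.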
00:28:17.565 ========================================================
00:28:17.565 Latency(us)
00:28:17.565 Device Information : IOPS MiB/s Average min max
00:28:17.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2274.56 1.11 39231.33 2207.00 1012037.73
00:28:17.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18379.28 8.97 6964.10 1575.97 369281.91
00:28:17.565 ========================================================
00:28:17.565 Total : 20653.84 10.08 10517.62 1575.97 1012037.73
00:28:17.565
00:28:17.565 16:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:17.565 16:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:17.823 true
00:28:17.823 16:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2090057
00:28:17.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2090057) - No such process
00:28:17.823 16:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2090057
00:28:17.823 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:17.823 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:18.081 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:18.081 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:28:18.081 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:28:18.081 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:18.081 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:28:18.339 null0
00:28:18.339 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:18.339 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:18.339 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:28:18.597 null1
00:28:18.597 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:18.597 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:18.597 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:18.597 null2 00:28:18.597 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:18.597 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:18.597 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:18.856 null3 00:28:18.856 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:18.856 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:18.856 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:19.114 null4 00:28:19.115 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.115 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.115 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:19.115 null5 00:28:19.115 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.115 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.115 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:19.373 null6 00:28:19.373 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.373 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.373 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:19.632 null7 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
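Note: the Total row in the Latency(us) summary above is consistent with an IOPS-weighted mean of the two per-namespace average latencies; a quick arithmetic check (values copied from the table, awk used only for the calculation):

    awk 'BEGIN {
        iops1 = 2274.56;  avg1 = 39231.33;   # NSID 1 row: IOPS, average latency (us)
        iops2 = 18379.28; avg2 = 6964.10;    # NSID 2 row
        total = iops1 + iops2;
        printf "Total IOPS %.2f, weighted average %.2f us\n",
               total, (iops1 * avg1 + iops2 * avg2) / total;
    }'
    # prints: Total IOPS 20653.84, weighted average 10517.62 us

The Total row's min (1575.97 us) and max (1012037.73 us) are likewise the minimum and maximum across the two namespace rows.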
00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
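Note: the sh@14-@18 fragments interleaved through this part of the trace come from the add_remove helper that each worker runs. A rough reconstruction from those fragments (the real function in test/nvmf/target/ns_hotplug_stress.sh may differ in detail; $rpc_py again stands in for the full scripts/rpc.py path shown in the log):

    # Each worker adds and removes one namespace-ID / null-bdev pair ten times.
    add_remove() {
        local nsid=$1 bdev=$2                                                            # sh@14
        for ((i = 0; i < 10; i++)); do                                                   # sh@16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }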
00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:19.632 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
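Note: the sh@58-@66 entries trace the parallel phase: eight null bdevs are created, one background add_remove worker is started per namespace, and the script then waits on all of the worker PIDs (the 'wait 2095390 2095392 ...' entry just below). A minimal sketch of that pattern under the same assumptions:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do             # sh@59-@60: create null0 .. null7
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do             # sh@62-@64: one worker per namespace ID
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                # sh@66: wait for all eight workers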
00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2095390 2095392 2095393 2095395 2095397 2095399 2095401 2095404 00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.633 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.891 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.891 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.891 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.891 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.891 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.891 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.891 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.891 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.150 16:29:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.150 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.410 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.670 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.670 16:29:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:20.670 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.670 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.670 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.670 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.670 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.670 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.929 16:29:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.929 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.929 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.929 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.929 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.929 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.929 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.929 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.187 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.187 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:21.188 
16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.188 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:21.446 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.446 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.446 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.446 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.446 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:21.446 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:21.446 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.446 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.705 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.964 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.964 16:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.964 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.964 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:21.964 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.964 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.964 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.964 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:22.222 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.222 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.222 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:22.222 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.222 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.222 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:22.223 16:29:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:22.223 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:22.482 
16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.482 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:22.741 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:22.741 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:22.741 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:22.741 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:22.741 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:22.741 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
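The repeated @16/@17/@18 entries above are the xtrace of the namespace hotplug stress loop itself: target/ns_hotplug_stress.sh keeps attaching the null0..null7 bdevs to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 and then detaching them again, with the order changing from pass to pass, under the "(( i < 10 ))" guard visible at script line 16. A minimal sketch of that pattern (an approximation for reading the trace, not the verbatim SPDK script; the pass count, the shuffle helper and the exact sequencing are assumptions) is:

    # Sketch only -- approximates the add/remove churn recorded in the xtrace above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    passes=50                                      # assumption: the log shows many passes

    for (( p = 0; p < passes; p++ )); do
        i=0
        # attach null0..null7 as namespaces 1..8 in shuffled order (script line 17)
        for n in $(shuf -i 1-8); do
            (( ++i ))                              # loop bookkeeping seen at script line 16
            (( i < 10 )) || break
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))"
        done
        # detach the same namespaces again, also in shuffled order (script line 18)
        for n in $(shuf -i 1-8); do
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done

Run against a live nvmf_tgt, this produces the same burst of nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns RPCs per pass that the elapsed timestamps above show completing within a few hundred milliseconds.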
00:28:22.741 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:22.741 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:22.999 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:23.000 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:23.000 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:23.000 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:23.000 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:23.000 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:23.258 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.259 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.259 
16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:23.517 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:23.517 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:23.517 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:23.517 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:23.517 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:23.517 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.517 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:23.518 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
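After the last pass the script clears its signal traps and calls nvmftestfini, and the trace that follows is the shared teardown from test/nvmf/common.sh: sync, unload the initiator-side nvme-tcp module (which also drops nvme_fabrics and nvme_keyring, as the rmmod output shows), kill the nvmf_tgt app by its saved pid (2089767 in this run), restore the iptables rules minus the SPDK_NVMF-tagged test rules, then remove the test network namespace and flush the leftover interface address. A hedged sketch of that sequence (helper names and exact ordering are approximations, not the verbatim common.sh):

    # Hedged sketch of the teardown visible below; not the verbatim nvmftestfini.
    teardown_sketch() {
        local tgt_pid=$1                   # nvmf_tgt pid, 2089767 in this log

        sync
        modprobe -v -r nvme-tcp            # pulls out nvme_fabrics / nvme_keyring too, as logged
        modprobe -v -r nvme-fabrics || true

        kill "$tgt_pid" 2>/dev/null || true   # killprocess in autotest_common.sh also waits on the pid

        # keep every iptables rule except the SPDK_NVMF-tagged test rules
        iptables-save | grep -v SPDK_NVMF | iptables-restore

        # drop the test netns and flush the peer address, as the trace does
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
        ip -4 addr flush cvl_0_1 2>/dev/null || true
    }

    teardown_sketch 2089767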
00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.776 rmmod nvme_tcp 00:28:23.776 rmmod nvme_fabrics 00:28:23.776 rmmod nvme_keyring 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2089767 ']' 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2089767 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2089767 ']' 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2089767 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.776 16:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2089767 00:28:24.034 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:24.034 16:29:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:24.034 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2089767' 00:28:24.034 killing process with pid 2089767 00:28:24.034 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2089767 00:28:24.034 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2089767 00:28:24.034 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.034 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.034 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.034 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:24.034 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:24.035 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.035 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:24.035 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.035 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:24.035 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.035 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.035 16:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.570 00:28:26.570 real 0m47.249s 00:28:26.570 user 2m56.355s 00:28:26.570 sys 0m20.094s 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:26.570 ************************************ 00:28:26.570 END TEST nvmf_ns_hotplug_stress 00:28:26.570 ************************************ 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:26.570 ************************************ 
00:28:26.570 START TEST nvmf_delete_subsystem 00:28:26.570 ************************************ 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:26.570 * Looking for test storage... 00:28:26.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:26.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.570 --rc genhtml_branch_coverage=1 00:28:26.570 --rc genhtml_function_coverage=1 00:28:26.570 --rc genhtml_legend=1 00:28:26.570 --rc geninfo_all_blocks=1 00:28:26.570 --rc geninfo_unexecuted_blocks=1 00:28:26.570 00:28:26.570 ' 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:26.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.570 --rc genhtml_branch_coverage=1 00:28:26.570 --rc genhtml_function_coverage=1 00:28:26.570 --rc genhtml_legend=1 00:28:26.570 --rc geninfo_all_blocks=1 00:28:26.570 --rc geninfo_unexecuted_blocks=1 00:28:26.570 00:28:26.570 ' 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:26.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.570 --rc genhtml_branch_coverage=1 00:28:26.570 --rc genhtml_function_coverage=1 00:28:26.570 --rc genhtml_legend=1 00:28:26.570 --rc geninfo_all_blocks=1 00:28:26.570 --rc geninfo_unexecuted_blocks=1 00:28:26.570 00:28:26.570 ' 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:26.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.570 --rc genhtml_branch_coverage=1 00:28:26.570 --rc genhtml_function_coverage=1 00:28:26.570 --rc 
genhtml_legend=1 00:28:26.570 --rc geninfo_all_blocks=1 00:28:26.570 --rc geninfo_unexecuted_blocks=1 00:28:26.570 00:28:26.570 ' 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.570 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.570 16:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.571 16:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.140 16:30:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.140 16:30:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:33.140 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.140 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:33.140 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.141 16:30:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:33.141 Found net devices under 0000:86:00.0: cvl_0_0 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:33.141 Found net devices under 0000:86:00.1: cvl_0_1 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:28:33.141 00:28:33.141 --- 10.0.0.2 ping statistics --- 00:28:33.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.141 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:28:33.141 00:28:33.141 --- 10.0.0.1 ping statistics --- 00:28:33.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.141 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2099833 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2099833 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2099833 ']' 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
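The block above is nvmftestinit/nvmf_tcp_init from test/nvmf/common.sh: the two net devices found under 0000:86:00.0 and 0000:86:00.1 (cvl_0_0 and cvl_0_1) are split across namespaces, the target-side port is moved into a private network namespace, addresses are assigned on both sides, an iptables rule opens TCP port 4420, and a ping in each direction proves the path works. A condensed, hedged sketch of the same topology, using only the interface names and addresses printed in this trace (the real helper also flushes stale addresses first and has separate branches for RDMA and virtual setups):

    ip netns add cvl_0_0_ns_spdk                        # target side lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator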
00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.141 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.141 [2024-11-20 16:30:03.488905] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:33.141 [2024-11-20 16:30:03.489839] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:28:33.141 [2024-11-20 16:30:03.489874] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.141 [2024-11-20 16:30:03.569106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:33.141 [2024-11-20 16:30:03.614028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.141 [2024-11-20 16:30:03.614061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.141 [2024-11-20 16:30:03.614068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.141 [2024-11-20 16:30:03.614074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.142 [2024-11-20 16:30:03.614079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.142 [2024-11-20 16:30:03.617220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.142 [2024-11-20 16:30:03.617223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.142 [2024-11-20 16:30:03.685132] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:33.142 [2024-11-20 16:30:03.685725] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:33.142 [2024-11-20 16:30:03.685945] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
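nvmfappstart then launches the target inside that namespace with a two-core mask and interrupt mode enabled, records its pid (2099833 here), and waitforlisten blocks until the application answers on its RPC socket; the notices above show both reactors starting and the app and poll-group threads being switched to interrupt mode. A minimal sketch of that start-and-wait step, assuming the /var/tmp/spdk.sock socket printed above (the polling shown here is a simplified stand-in, not the real waitforlisten helper, which also bounds the retries, max_retries=100 in this trace, and checks that the pid is still alive):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # poll until the target's RPC server is up (simplified stand-in for waitforlisten)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done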
00:28:33.142 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.142 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:33.142 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.142 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.142 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.142 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.142 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.142 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.142 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.142 [2024-11-20 16:30:04.361952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.142 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.400 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:33.400 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.400 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.400 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.400 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.400 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.400 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.401 [2024-11-20 16:30:04.386231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.401 NULL1 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.401 16:30:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.401 Delay0 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2099921 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:33.401 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:33.401 [2024-11-20 16:30:04.497190] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
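Lines 15 through 28 of delete_subsystem.sh, traced above, build the whole target configuration over RPC and then start a 5-second randrw perf job against it: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with at most 10 namespaces, a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev (the large -r/-t/-w/-n latencies keep plenty of I/O in flight), and that Delay0 bdev attached as namespace 1. The same sequence issued directly with scripts/rpc.py, arguments copied from this trace (rpc_cmd in the harness ultimately drives rpc.py):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512               # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # artificial latency keeps I/O queued
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!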
00:28:35.298 16:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.298 16:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.298 16:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 [2024-11-20 16:30:06.619956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20044a0 is same with the 
state(6) to be set 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with 
error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 starting I/O failed: -6 00:28:35.557 Write completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.557 Read completed with error (sct=0, sc=8) 00:28:35.558 starting I/O failed: -6 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 starting I/O failed: -6 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 starting I/O failed: -6 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 starting I/O failed: -6 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 starting I/O failed: -6 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 starting I/O failed: -6 00:28:35.558 starting I/O failed: -6 00:28:35.558 starting I/O failed: -6 00:28:35.558 starting I/O failed: -6 00:28:35.558 starting I/O failed: -6 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 
Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Write completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:35.558 Read completed with error (sct=0, sc=8) 00:28:36.492 [2024-11-20 16:30:07.591964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20059a0 is same with the state(6) to be set 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Write completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Write completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Write completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Write completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Write completed with error (sct=0, sc=8) 00:28:36.492 Read completed with error (sct=0, sc=8) 00:28:36.492 Write completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 [2024-11-20 16:30:07.623345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20042c0 is same with the state(6) to be set 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed 
with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 [2024-11-20 16:30:07.623659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2004860 is same with the state(6) to be set 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 [2024-11-20 16:30:07.624452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb4a800d680 is same with the state(6) to be set 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, 
sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 Read completed with error (sct=0, sc=8) 00:28:36.493 Write completed with error (sct=0, sc=8) 00:28:36.493 [2024-11-20 16:30:07.624988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb4a800d020 is same with the state(6) to be set 00:28:36.493 Initializing NVMe Controllers 00:28:36.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.493 Controller IO queue size 128, less than required. 00:28:36.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:36.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:36.493 Initialization complete. Launching workers. 00:28:36.493 ======================================================== 00:28:36.493 Latency(us) 00:28:36.493 Device Information : IOPS MiB/s Average min max 00:28:36.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.56 0.09 883913.66 295.44 1008553.11 00:28:36.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.63 0.08 915574.19 200.71 1010434.21 00:28:36.493 ======================================================== 00:28:36.493 Total : 337.19 0.16 899183.56 200.71 1010434.21 00:28:36.493 00:28:36.493 [2024-11-20 16:30:07.625550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20059a0 (9): Bad file descriptor 00:28:36.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:36.493 16:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.493 16:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:36.493 16:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2099921 00:28:36.493 16:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2099921 00:28:37.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2099921) - No such process 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2099921 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2099921 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 
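Once nvmf_delete_subsystem removes cnode1, every command still queued at the delay bdev completes back to the initiator with an error (the sct=0, sc=8 flood above), perf prints its partial latency table and exits, and the script polls with kill -0 until the perf pid is gone before asserting, via the NOT wait check that continues below, that the process really did exit with a failure status. A sketch of that wait loop as reconstructed from the xtrace (the loop-body ordering and the timeout handling are approximations; delete_subsystem.sh is the authoritative version):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        sleep 0.5
        (( delay++ > 30 )) && break     # roughly a 15 s upper bound before giving up
    done
    NOT wait "$perf_pid"                # NOT inverts the status: wait must report perf's error exit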
00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2099921 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.061 [2024-11-20 16:30:08.154188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2100991 00:28:37.061 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:37.062 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:37.062 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2100991 00:28:37.062 16:30:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:37.062 [2024-11-20 16:30:08.238855] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:37.627 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:37.627 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2100991 00:28:37.627 16:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:38.192 16:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:38.192 16:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2100991 00:28:38.192 16:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:38.757 16:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:38.757 16:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2100991 00:28:38.757 16:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:39.014 16:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:39.014 16:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2100991 00:28:39.014 16:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:39.580 16:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:39.580 16:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2100991 00:28:39.580 16:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:40.145 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:40.145 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2100991 00:28:40.145 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:40.145 Initializing NVMe Controllers 00:28:40.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.145 Controller IO queue size 128, less than required. 00:28:40.145 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:40.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:40.145 Initialization complete. Launching workers. 
00:28:40.145 ======================================================== 00:28:40.145 Latency(us) 00:28:40.145 Device Information : IOPS MiB/s Average min max 00:28:40.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002193.06 1000167.77 1040517.60 00:28:40.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004085.11 1000330.46 1011161.58 00:28:40.145 ======================================================== 00:28:40.145 Total : 256.00 0.12 1003139.09 1000167.77 1040517.60 00:28:40.145 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2100991 00:28:40.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2100991) - No such process 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2100991 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:40.712 rmmod nvme_tcp 00:28:40.712 rmmod nvme_fabrics 00:28:40.712 rmmod nvme_keyring 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2099833 ']' 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2099833 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2099833 ']' 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2099833 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2099833 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2099833' 00:28:40.712 killing process with pid 2099833 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2099833 00:28:40.712 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2099833 00:28:40.971 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.971 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:40.971 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:40.971 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:40.971 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:40.971 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:40.971 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:40.971 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.971 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.971 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.971 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.972 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.878 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.878 00:28:42.878 real 0m16.720s 00:28:42.878 user 0m26.226s 00:28:42.878 sys 0m6.061s 00:28:42.878 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.878 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:42.878 ************************************ 00:28:42.878 END TEST nvmf_delete_subsystem 00:28:42.878 ************************************ 00:28:42.878 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:42.878 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:42.878 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.878 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:43.139 ************************************ 00:28:43.139 START TEST nvmf_host_management 00:28:43.139 ************************************ 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:43.139 * Looking for test storage... 00:28:43.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:43.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.139 --rc genhtml_branch_coverage=1 00:28:43.139 --rc genhtml_function_coverage=1 00:28:43.139 --rc genhtml_legend=1 00:28:43.139 --rc geninfo_all_blocks=1 00:28:43.139 --rc geninfo_unexecuted_blocks=1 00:28:43.139 00:28:43.139 ' 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:43.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.139 --rc genhtml_branch_coverage=1 00:28:43.139 --rc genhtml_function_coverage=1 00:28:43.139 --rc genhtml_legend=1 00:28:43.139 --rc geninfo_all_blocks=1 00:28:43.139 --rc geninfo_unexecuted_blocks=1 00:28:43.139 00:28:43.139 ' 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:43.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.139 --rc genhtml_branch_coverage=1 00:28:43.139 --rc genhtml_function_coverage=1 00:28:43.139 --rc genhtml_legend=1 00:28:43.139 --rc geninfo_all_blocks=1 00:28:43.139 --rc geninfo_unexecuted_blocks=1 00:28:43.139 00:28:43.139 ' 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:43.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.139 --rc genhtml_branch_coverage=1 00:28:43.139 --rc genhtml_function_coverage=1 00:28:43.139 --rc genhtml_legend=1 
00:28:43.139 --rc geninfo_all_blocks=1 00:28:43.139 --rc geninfo_unexecuted_blocks=1 00:28:43.139 00:28:43.139 ' 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.139 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.140 16:30:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.140 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:49.713 16:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:49.713 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:49.713 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
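[editor's sketch] The gather_supported_nvmf_pci_devs trace around this point reduces to a simple scan: the Intel E810 PCI IDs (0x1592/0x159b) are matched, kept because SPDK_TEST_NVMF_NICS=e810, and the kernel net devices bound under each matched PCI function are collected. Roughly (reconstructed from the trace, not the verbatim nvmf/common.sh):

pci_devs=("${e810[@]}")                                    # only the E810 ports are considered in this run
for pci in "${pci_devs[@]}"; do                            # here: 0000:86:00.0 and 0000:86:00.1
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)         # netdevs the kernel created for that port
  pci_net_devs=("${pci_net_devs[@]##*/}")                  # strip the sysfs path, keep the interface name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"  # -> cvl_0_0 and cvl_0_1 in this log
  net_devs+=("${pci_net_devs[@]}")
done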
00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:49.713 Found net devices under 0000:86:00.0: cvl_0_0 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:49.713 Found net devices under 0000:86:00.1: cvl_0_1 00:28:49.713 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:49.714 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:49.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:49.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:28:49.714 00:28:49.714 --- 10.0.0.2 ping statistics --- 00:28:49.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.714 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:49.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:49.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:28:49.714 00:28:49.714 --- 10.0.0.1 ping statistics --- 00:28:49.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.714 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2104982 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2104982 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2104982 ']' 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:49.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.714 16:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:49.714 [2024-11-20 16:30:20.283240] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:49.714 [2024-11-20 16:30:20.284171] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:28:49.714 [2024-11-20 16:30:20.284218] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.714 [2024-11-20 16:30:20.364759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:49.714 [2024-11-20 16:30:20.407586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.714 [2024-11-20 16:30:20.407621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:49.714 [2024-11-20 16:30:20.407628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:49.714 [2024-11-20 16:30:20.407633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:49.714 [2024-11-20 16:30:20.407638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:49.714 [2024-11-20 16:30:20.409136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.714 [2024-11-20 16:30:20.409242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.714 [2024-11-20 16:30:20.409347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.714 [2024-11-20 16:30:20.409348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:49.714 [2024-11-20 16:30:20.478931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:49.714 [2024-11-20 16:30:20.479893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:49.714 [2024-11-20 16:30:20.479966] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:49.714 [2024-11-20 16:30:20.480378] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:49.714 [2024-11-20 16:30:20.480420] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
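[editor's sketch] Pulling the nvmf_tcp_init and nvmfappstart steps out of the xtrace above, the test topology for this run is the following (a condensed sketch; interface names, addresses and paths are the ones printed in this log):

ip netns add cvl_0_0_ns_spdk                       # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # one E810 port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port (SPDK_NVMF comment tag omitted here)
# connectivity is verified with the two pings above, then the interrupt-mode target is
# launched inside the namespace:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E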
00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:49.974 [2024-11-20 16:30:21.170138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.974 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:50.233 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:50.233 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.233 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.233 Malloc0 00:28:50.233 [2024-11-20 16:30:21.254382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.233 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.233 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:50.233 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.233 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.233 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2105250 00:28:50.233 16:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2105250 /var/tmp/bdevperf.sock 00:28:50.233 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2105250 ']' 00:28:50.233 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:50.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.234 { 00:28:50.234 "params": { 00:28:50.234 "name": "Nvme$subsystem", 00:28:50.234 "trtype": "$TEST_TRANSPORT", 00:28:50.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.234 "adrfam": "ipv4", 00:28:50.234 "trsvcid": "$NVMF_PORT", 00:28:50.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.234 "hdgst": ${hdgst:-false}, 00:28:50.234 "ddgst": ${ddgst:-false} 00:28:50.234 }, 00:28:50.234 "method": "bdev_nvme_attach_controller" 00:28:50.234 } 00:28:50.234 EOF 00:28:50.234 )") 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
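[editor's sketch] The heredoc above is gen_nvmf_target_json assembling the bdevperf configuration; the resulting JSON is printed next in the log. In effect, the host side of the test is started with roughly the following command line (a sketch; the harness feeds the JSON through process substitution, which is where the --json /dev/fd/63 in the trace comes from):

# queue depth 64, 64 KiB I/O, verify workload, 10 s run, RPC socket /var/tmp/bdevperf.sock
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json <(gen_nvmf_target_json 0)
# gen_nvmf_target_json 0 emits the Nvme0 bdev_nvme_attach_controller config shown below,
# pointing at nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420.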
00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:50.234 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:50.234 "params": { 00:28:50.234 "name": "Nvme0", 00:28:50.234 "trtype": "tcp", 00:28:50.234 "traddr": "10.0.0.2", 00:28:50.234 "adrfam": "ipv4", 00:28:50.234 "trsvcid": "4420", 00:28:50.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.234 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:50.234 "hdgst": false, 00:28:50.234 "ddgst": false 00:28:50.234 }, 00:28:50.234 "method": "bdev_nvme_attach_controller" 00:28:50.234 }' 00:28:50.234 [2024-11-20 16:30:21.348292] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:28:50.234 [2024-11-20 16:30:21.348340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105250 ] 00:28:50.234 [2024-11-20 16:30:21.422386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.234 [2024-11-20 16:30:21.463156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.492 Running I/O for 10 seconds... 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r 
'.bdevs[0].num_read_ops' 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:28:50.492 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:50.751 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:50.751 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:50.751 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:50.751 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:50.751 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.751 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:51.011 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:51.011 [2024-11-20 16:30:22.013876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013927] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.013995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.014001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.014007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.014012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141ed70 is same with the state(6) to be set 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.011 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:51.011 [2024-11-20 16:30:22.023585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.011 [2024-11-20 16:30:22.023616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 16:30:22.023625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
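The two rpc_cmd calls traced just above are the intended fault injection of this test: with verify I/O still in flight, the host NQN is removed from the subsystem's allowed hosts, which makes the target tear down the queue pairs (hence the stream of ABORTED - SQ DELETION completions around this point in the log), and it is then re-added so the controller reset that follows can reconnect. The waitforio loop a little earlier gates this on bdevperf having completed at least 100 reads. Paraphrased in isolation (a sketch only, calling rpc.py directly rather than through the rpc_cmd wrapper used by the script), the sequence is roughly:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Poll bdevperf's iostat over its private RPC socket until at least 100
# reads have completed (mirrors the traced waitforio loop).
for ((i = 10; i != 0; i--)); do
  read_io_count=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
    jq -r '.bdevs[0].num_read_ops')
  [ "$read_io_count" -ge 100 ] && break
  sleep 0.25
done

# Drop the host from the subsystem while I/O is running, then allow it
# again so the automatic controller reset can succeed.
"$rpc_py" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
"$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0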
00:28:51.011 [2024-11-20 16:30:22.023633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 16:30:22.023640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.011 [2024-11-20 16:30:22.023652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 16:30:22.023659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.011 [2024-11-20 16:30:22.023666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 16:30:22.023672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27500 is same with the state(6) to be set 00:28:51.011 [2024-11-20 16:30:22.023893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 16:30:22.023904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 16:30:22.023916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 16:30:22.023923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 16:30:22.023931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 16:30:22.023938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 16:30:22.023947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 16:30:22.023953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 16:30:22.023961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 16:30:22.023967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.023975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.023981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.023990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.023997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024005] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.012 [2024-11-20 16:30:22.024543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-20 16:30:22.024551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.013 [2024-11-20 16:30:22.024729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-20 16:30:22.024737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.013 [2024-11-20 16:30:22.024743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:51.013 [2024-11-20 16:30:22.024751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.013 [2024-11-20 16:30:22.024758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:51.013 [2024-11-20 16:30:22.024766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.013 [2024-11-20 16:30:22.024772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:51.013 [2024-11-20 16:30:22.024781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.013 [2024-11-20 16:30:22.024788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:51.013 [2024-11-20 16:30:22.024795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.013 [2024-11-20 16:30:22.024801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:51.013 [2024-11-20 16:30:22.024810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.013 [2024-11-20 16:30:22.024816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:51.013 [2024-11-20 16:30:22.024824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.013 [2024-11-20 16:30:22.024831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:51.013 [2024-11-20 16:30:22.025749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:51.013 task offset: 98304 on job bdev=Nvme0n1 fails
00:28:51.013
00:28:51.013 Latency(us)
00:28:51.013 [2024-11-20T15:30:22.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:51.013 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.013 Job: Nvme0n1 ended in about 0.41 seconds with error
00:28:51.013 Verification LBA range: start 0x0 length 0x400
00:28:51.013 Nvme0n1 : 0.41 1892.88 118.30 157.74 0.00 30387.43 1552.58 26963.38
00:28:51.013 [2024-11-20T15:30:22.247Z] ===================================================================================================================
00:28:51.013 [2024-11-20T15:30:22.247Z] Total : 1892.88 118.30 157.74 0.00 30387.43 1552.58 26963.38
00:28:51.013 [2024-11-20 16:30:22.028095] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:51.013 [2024-11-20 16:30:22.028115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to
flush tqpair=0x1e27500 (9): Bad file descriptor 00:28:51.013 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.013 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:51.013 [2024-11-20 16:30:22.162379] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2105250 00:28:51.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2105250) - No such process 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.948 { 00:28:51.948 "params": { 00:28:51.948 "name": "Nvme$subsystem", 00:28:51.948 "trtype": "$TEST_TRANSPORT", 00:28:51.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.948 "adrfam": "ipv4", 00:28:51.948 "trsvcid": "$NVMF_PORT", 00:28:51.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.948 "hdgst": ${hdgst:-false}, 00:28:51.948 "ddgst": ${ddgst:-false} 00:28:51.948 }, 00:28:51.948 "method": "bdev_nvme_attach_controller" 00:28:51.948 } 00:28:51.948 EOF 00:28:51.948 )") 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
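After the forced reset the harness kills the first bdevperf instance (the kill -9 above reports No such process, apparently because that process had already exited on its own after spdk_app_stop) and launches a second, one-second pass against the same subsystem to confirm the target still serves I/O; the expanded JSON for this second attach follows just below. Reproduced by hand it would look roughly like this (reusing the hypothetical gen_nvmf_target_json_sketch helper sketched earlier; the process substitution supplies the descriptor that appears as /dev/fd/62 in the trace):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/bdevperf --json <(gen_nvmf_target_json_sketch 0) \
  -q 64 -o 65536 -w verify -t 1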
00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:51.948 16:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:51.948 "params": { 00:28:51.948 "name": "Nvme0", 00:28:51.948 "trtype": "tcp", 00:28:51.948 "traddr": "10.0.0.2", 00:28:51.948 "adrfam": "ipv4", 00:28:51.948 "trsvcid": "4420", 00:28:51.948 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.948 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:51.948 "hdgst": false, 00:28:51.948 "ddgst": false 00:28:51.948 }, 00:28:51.948 "method": "bdev_nvme_attach_controller" 00:28:51.948 }' 00:28:51.948 [2024-11-20 16:30:23.086707] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:28:51.948 [2024-11-20 16:30:23.086753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105501 ] 00:28:51.948 [2024-11-20 16:30:23.160448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.206 [2024-11-20 16:30:23.199883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.464 Running I/O for 1 seconds... 00:28:53.400 1984.00 IOPS, 124.00 MiB/s 00:28:53.400 Latency(us) 00:28:53.400 [2024-11-20T15:30:24.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.400 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.400 Verification LBA range: start 0x0 length 0x400 00:28:53.400 Nvme0n1 : 1.01 2043.99 127.75 0.00 0.00 30725.47 1505.77 27337.87 00:28:53.400 [2024-11-20T15:30:24.634Z] =================================================================================================================== 00:28:53.400 [2024-11-20T15:30:24.634Z] Total : 2043.99 127.75 0.00 0.00 30725.47 1505.77 27337.87 00:28:53.700 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.701 rmmod nvme_tcp 00:28:53.701 rmmod nvme_fabrics 00:28:53.701 rmmod nvme_keyring 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2104982 ']' 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2104982 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2104982 ']' 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2104982 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2104982 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2104982' 00:28:53.701 killing process with pid 2104982 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2104982 00:28:53.701 16:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2104982 00:28:54.035 [2024-11-20 16:30:25.010190] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.035 16:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.940 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:55.940 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:55.940 00:28:55.940 real 0m12.980s 00:28:55.940 user 0m18.314s 00:28:55.940 sys 0m6.375s 00:28:55.940 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.940 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:55.940 ************************************ 00:28:55.940 END TEST nvmf_host_management 00:28:55.940 ************************************ 00:28:55.940 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:55.940 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:55.940 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.940 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:56.200 ************************************ 00:28:56.200 START TEST nvmf_lvol 00:28:56.200 ************************************ 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:56.200 * Looking for test storage... 
00:28:56.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:56.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.200 --rc genhtml_branch_coverage=1 00:28:56.200 --rc genhtml_function_coverage=1 00:28:56.200 --rc genhtml_legend=1 00:28:56.200 --rc geninfo_all_blocks=1 00:28:56.200 --rc geninfo_unexecuted_blocks=1 00:28:56.200 00:28:56.200 ' 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:56.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.200 --rc genhtml_branch_coverage=1 00:28:56.200 --rc genhtml_function_coverage=1 00:28:56.200 --rc genhtml_legend=1 00:28:56.200 --rc geninfo_all_blocks=1 00:28:56.200 --rc geninfo_unexecuted_blocks=1 00:28:56.200 00:28:56.200 ' 00:28:56.200 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:56.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.200 --rc genhtml_branch_coverage=1 00:28:56.200 --rc genhtml_function_coverage=1 00:28:56.200 --rc genhtml_legend=1 00:28:56.201 --rc geninfo_all_blocks=1 00:28:56.201 --rc geninfo_unexecuted_blocks=1 00:28:56.201 00:28:56.201 ' 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:56.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.201 --rc genhtml_branch_coverage=1 00:28:56.201 --rc genhtml_function_coverage=1 00:28:56.201 --rc genhtml_legend=1 00:28:56.201 --rc geninfo_all_blocks=1 00:28:56.201 --rc geninfo_unexecuted_blocks=1 00:28:56.201 00:28:56.201 ' 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.201 16:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.201 16:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.773 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:02.774 16:30:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:02.774 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:02.774 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:02.774 Found net devices under 0000:86:00.0: cvl_0_0 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:02.774 Found net devices under 0000:86:00.1: cvl_0_1 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:02.774 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.774 
16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:02.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:02.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:29:02.774 00:29:02.774 --- 10.0.0.2 ping statistics --- 00:29:02.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.774 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:02.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:02.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:29:02.774 00:29:02.774 --- 10.0.0.1 ping statistics --- 00:29:02.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.774 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:02.774 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2109262 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2109262 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2109262 ']' 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:02.775 [2024-11-20 16:30:33.356359] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
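For reference, the nvmftestinit/nvmf_tcp_init sequence traced above reduces to roughly the shell below: one port of the E810 pair is moved into a private network namespace so the target side (10.0.0.2 on cvl_0_0 inside the namespace) and the initiator side (10.0.0.1 on cvl_0_1 in the root namespace) can exchange real TCP traffic on a single host. This is a condensed sketch of what this particular run traced, not the full common.sh logic; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are the ones this run discovered and assigned.

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace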
00:29:02.775 [2024-11-20 16:30:33.357328] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:29:02.775 [2024-11-20 16:30:33.357366] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.775 [2024-11-20 16:30:33.435653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:02.775 [2024-11-20 16:30:33.477824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.775 [2024-11-20 16:30:33.477861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.775 [2024-11-20 16:30:33.477868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.775 [2024-11-20 16:30:33.477874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.775 [2024-11-20 16:30:33.477878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.775 [2024-11-20 16:30:33.479227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.775 [2024-11-20 16:30:33.479293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.775 [2024-11-20 16:30:33.479294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.775 [2024-11-20 16:30:33.548300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:02.775 [2024-11-20 16:30:33.549155] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:02.775 [2024-11-20 16:30:33.549276] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:02.775 [2024-11-20 16:30:33.549446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
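The nvmf_lvol test body traced next provisions a logical volume on top of a RAID0 of two malloc bdevs and exports it over NVMe/TCP, then reshapes it while I/O is in flight. Condensed into plain RPC calls it is roughly the sequence below (a sketch of what the trace shows: rpc.py stands for the full scripts/rpc.py path used in the trace, and the UUIDs are whatever each RPC returns, captured into shell variables by the script).

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                # -> Malloc0
  rpc.py bdev_malloc_create 64 512                # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 10 s of 4 KiB random writes from the initiator while the lvol is reshaped underneath:
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!
  snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  rpc.py bdev_lvol_resize "$lvol" 30
  clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)
  rpc.py bdev_lvol_inflate "$clone"
  wait $perf_pid
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0        # teardown
  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"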
00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:02.775 [2024-11-20 16:30:33.784086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.775 16:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:03.035 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:03.035 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:03.294 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:03.294 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:03.294 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:03.553 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f600c94e-6835-4edb-8446-9470375a09a1 00:29:03.553 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f600c94e-6835-4edb-8446-9470375a09a1 lvol 20 00:29:03.812 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fad464d7-d755-4eb3-80f5-e5283ae4d75b 00:29:03.812 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:03.812 16:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fad464d7-d755-4eb3-80f5-e5283ae4d75b 00:29:04.070 16:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:04.327 [2024-11-20 16:30:35.403996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:29:04.327 16:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:04.584 16:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2109744 00:29:04.584 16:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:04.584 16:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:05.515 16:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fad464d7-d755-4eb3-80f5-e5283ae4d75b MY_SNAPSHOT 00:29:05.772 16:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=386ef1bf-cd39-4bb2-957d-7b953d490938 00:29:05.772 16:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fad464d7-d755-4eb3-80f5-e5283ae4d75b 30 00:29:06.029 16:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 386ef1bf-cd39-4bb2-957d-7b953d490938 MY_CLONE 00:29:06.285 16:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c99147cd-30aa-4fc1-b516-b2217e8c2481 00:29:06.285 16:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c99147cd-30aa-4fc1-b516-b2217e8c2481 00:29:06.849 16:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2109744 00:29:14.949 Initializing NVMe Controllers 00:29:14.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:14.949 Controller IO queue size 128, less than required. 00:29:14.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:14.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:14.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:14.950 Initialization complete. Launching workers. 
00:29:14.950 ======================================================== 00:29:14.950 Latency(us) 00:29:14.950 Device Information : IOPS MiB/s Average min max 00:29:14.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12132.50 47.39 10551.50 2150.14 48009.05 00:29:14.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12002.60 46.89 10664.85 3606.46 56279.65 00:29:14.950 ======================================================== 00:29:14.950 Total : 24135.10 94.28 10607.87 2150.14 56279.65 00:29:14.950 00:29:14.950 16:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:14.950 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fad464d7-d755-4eb3-80f5-e5283ae4d75b 00:29:15.208 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f600c94e-6835-4edb-8446-9470375a09a1 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.466 rmmod nvme_tcp 00:29:15.466 rmmod nvme_fabrics 00:29:15.466 rmmod nvme_keyring 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2109262 ']' 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2109262 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2109262 ']' 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2109262 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2109262 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2109262' 00:29:15.466 killing process with pid 2109262 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2109262 00:29:15.466 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2109262 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.724 16:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.259 16:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.259 00:29:18.259 real 0m21.719s 00:29:18.259 user 0m55.378s 00:29:18.259 sys 0m9.721s 00:29:18.259 16:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.259 16:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:18.259 ************************************ 00:29:18.259 END TEST nvmf_lvol 00:29:18.259 ************************************ 00:29:18.259 16:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:18.259 16:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:18.259 16:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.259 16:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:18.259 ************************************ 00:29:18.259 START TEST nvmf_lvs_grow 00:29:18.259 
************************************ 00:29:18.259 16:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:18.259 * Looking for test storage... 00:29:18.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:18.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.259 --rc genhtml_branch_coverage=1 00:29:18.259 --rc genhtml_function_coverage=1 00:29:18.259 --rc genhtml_legend=1 00:29:18.259 --rc geninfo_all_blocks=1 00:29:18.259 --rc geninfo_unexecuted_blocks=1 00:29:18.259 00:29:18.259 ' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:18.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.259 --rc genhtml_branch_coverage=1 00:29:18.259 --rc genhtml_function_coverage=1 00:29:18.259 --rc genhtml_legend=1 00:29:18.259 --rc geninfo_all_blocks=1 00:29:18.259 --rc geninfo_unexecuted_blocks=1 00:29:18.259 00:29:18.259 ' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:18.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.259 --rc genhtml_branch_coverage=1 00:29:18.259 --rc genhtml_function_coverage=1 00:29:18.259 --rc genhtml_legend=1 00:29:18.259 --rc geninfo_all_blocks=1 00:29:18.259 --rc geninfo_unexecuted_blocks=1 00:29:18.259 00:29:18.259 ' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:18.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.259 --rc genhtml_branch_coverage=1 00:29:18.259 --rc genhtml_function_coverage=1 00:29:18.259 --rc genhtml_legend=1 00:29:18.259 --rc geninfo_all_blocks=1 00:29:18.259 --rc geninfo_unexecuted_blocks=1 00:29:18.259 00:29:18.259 ' 00:29:18.259 16:30:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.259 16:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.830 16:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:24.830 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:24.830 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.830 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:24.831 Found net devices under 0000:86:00.0: cvl_0_0 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:24.831 Found net devices under 0000:86:00.1: cvl_0_1 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.831 16:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.831 16:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:29:24.831 00:29:24.831 --- 10.0.0.2 ping statistics --- 00:29:24.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.831 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:29:24.831 00:29:24.831 --- 10.0.0.1 ping statistics --- 00:29:24.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.831 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2114878 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2114878 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2114878 ']' 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.831 [2024-11-20 16:30:55.187209] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
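Annotation: the nvmf_tcp_init sequence traced above moves one of the two e810 ports (cvl_0_0) into a private network namespace and leaves its peer (cvl_0_1) on the host, so the target and initiator sides of the TCP transport talk over a real link on a single machine. A minimal sketch of that plumbing, condensed from the trace (the 10.0.0.0/24 addressing and interface names are the values used in this run; $SPDK_ROOT stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and the address flushes and error handling are omitted):

# Target-side port goes into its own namespace; initiator-side port stays on the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP (namespace side)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# The target itself is then launched inside the namespace, one core, interrupt mode:
ip netns exec cvl_0_0_ns_spdk $SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1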
00:29:24.831 [2024-11-20 16:30:55.188084] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:29:24.831 [2024-11-20 16:30:55.188116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.831 [2024-11-20 16:30:55.268140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.831 [2024-11-20 16:30:55.309942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.831 [2024-11-20 16:30:55.309977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.831 [2024-11-20 16:30:55.309984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.831 [2024-11-20 16:30:55.309990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.831 [2024-11-20 16:30:55.309995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.831 [2024-11-20 16:30:55.310556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.831 [2024-11-20 16:30:55.379921] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:24.831 [2024-11-20 16:30:55.380148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.831 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:24.832 [2024-11-20 16:30:55.611189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.832 ************************************ 00:29:24.832 START TEST lvs_grow_clean 00:29:24.832 ************************************ 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:24.832 16:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:25.091 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=437ebb4f-0622-4567-ac10-85d2a5e3f67e 00:29:25.091 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e 00:29:25.091 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:25.091 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:25.091 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:25.091 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e lvol 150 00:29:25.350 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=324505aa-b52e-41d2-b45e-3baf5e18ff1f 00:29:25.350 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:25.350 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:25.609 [2024-11-20 16:30:56.646937] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:25.609 [2024-11-20 16:30:56.647063] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:25.609 true 00:29:25.609 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e 00:29:25.609 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:25.868 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:25.868 16:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:25.868 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 324505aa-b52e-41d2-b45e-3baf5e18ff1f 00:29:26.126 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:26.385 [2024-11-20 16:30:57.399440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.385 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:26.644 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2115374 00:29:26.644 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:26.644 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:26.644 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2115374 /var/tmp/bdevperf.sock 00:29:26.644 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2115374 ']' 00:29:26.644 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:26.644 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.644 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:26.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:26.644 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.644 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:26.644 [2024-11-20 16:30:57.669851] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:29:26.644 [2024-11-20 16:30:57.669899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115374 ] 00:29:26.644 [2024-11-20 16:30:57.745047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.644 [2024-11-20 16:30:57.786879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.903 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.903 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:26.903 16:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:26.903 Nvme0n1 00:29:26.903 16:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:27.162 [ 00:29:27.162 { 00:29:27.162 "name": "Nvme0n1", 00:29:27.162 "aliases": [ 00:29:27.162 "324505aa-b52e-41d2-b45e-3baf5e18ff1f" 00:29:27.162 ], 00:29:27.162 "product_name": "NVMe disk", 00:29:27.162 "block_size": 4096, 00:29:27.162 "num_blocks": 38912, 00:29:27.162 "uuid": "324505aa-b52e-41d2-b45e-3baf5e18ff1f", 00:29:27.162 "numa_id": 1, 00:29:27.162 "assigned_rate_limits": { 00:29:27.162 "rw_ios_per_sec": 0, 00:29:27.162 "rw_mbytes_per_sec": 0, 00:29:27.162 "r_mbytes_per_sec": 0, 00:29:27.162 "w_mbytes_per_sec": 0 00:29:27.162 }, 00:29:27.162 "claimed": false, 00:29:27.162 "zoned": false, 00:29:27.162 "supported_io_types": { 00:29:27.162 "read": true, 00:29:27.162 "write": true, 00:29:27.162 "unmap": true, 00:29:27.162 "flush": true, 00:29:27.162 "reset": true, 00:29:27.162 "nvme_admin": true, 00:29:27.162 "nvme_io": true, 00:29:27.162 "nvme_io_md": false, 00:29:27.162 "write_zeroes": true, 00:29:27.162 "zcopy": false, 00:29:27.162 "get_zone_info": false, 00:29:27.162 "zone_management": false, 00:29:27.162 "zone_append": false, 00:29:27.162 "compare": true, 00:29:27.162 "compare_and_write": true, 00:29:27.162 "abort": true, 00:29:27.162 "seek_hole": false, 00:29:27.162 "seek_data": false, 00:29:27.162 "copy": true, 
00:29:27.162 "nvme_iov_md": false 00:29:27.162 }, 00:29:27.162 "memory_domains": [ 00:29:27.162 { 00:29:27.162 "dma_device_id": "system", 00:29:27.162 "dma_device_type": 1 00:29:27.162 } 00:29:27.162 ], 00:29:27.162 "driver_specific": { 00:29:27.162 "nvme": [ 00:29:27.162 { 00:29:27.162 "trid": { 00:29:27.162 "trtype": "TCP", 00:29:27.162 "adrfam": "IPv4", 00:29:27.162 "traddr": "10.0.0.2", 00:29:27.162 "trsvcid": "4420", 00:29:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:27.162 }, 00:29:27.162 "ctrlr_data": { 00:29:27.162 "cntlid": 1, 00:29:27.162 "vendor_id": "0x8086", 00:29:27.162 "model_number": "SPDK bdev Controller", 00:29:27.162 "serial_number": "SPDK0", 00:29:27.162 "firmware_revision": "25.01", 00:29:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:27.162 "oacs": { 00:29:27.162 "security": 0, 00:29:27.162 "format": 0, 00:29:27.162 "firmware": 0, 00:29:27.162 "ns_manage": 0 00:29:27.162 }, 00:29:27.162 "multi_ctrlr": true, 00:29:27.162 "ana_reporting": false 00:29:27.162 }, 00:29:27.162 "vs": { 00:29:27.162 "nvme_version": "1.3" 00:29:27.162 }, 00:29:27.162 "ns_data": { 00:29:27.162 "id": 1, 00:29:27.162 "can_share": true 00:29:27.162 } 00:29:27.162 } 00:29:27.162 ], 00:29:27.162 "mp_policy": "active_passive" 00:29:27.162 } 00:29:27.162 } 00:29:27.162 ] 00:29:27.162 16:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2115486 00:29:27.162 16:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:27.162 16:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:27.420 Running I/O for 10 seconds... 
00:29:28.354 Latency(us) 00:29:28.354 [2024-11-20T15:30:59.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.354 Nvme0n1 : 1.00 22543.00 88.06 0.00 0.00 0.00 0.00 0.00 00:29:28.354 [2024-11-20T15:30:59.588Z] =================================================================================================================== 00:29:28.354 [2024-11-20T15:30:59.588Z] Total : 22543.00 88.06 0.00 0.00 0.00 0.00 0.00 00:29:28.354 00:29:29.288 16:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e 00:29:29.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.288 Nvme0n1 : 2.00 22630.50 88.40 0.00 0.00 0.00 0.00 0.00 00:29:29.288 [2024-11-20T15:31:00.522Z] =================================================================================================================== 00:29:29.288 [2024-11-20T15:31:00.522Z] Total : 22630.50 88.40 0.00 0.00 0.00 0.00 0.00 00:29:29.288 00:29:29.288 true 00:29:29.547 16:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e 00:29:29.547 16:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:29.547 16:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:29.547 16:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:29.547 16:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2115486 00:29:30.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.480 Nvme0n1 : 3.00 22749.33 88.86 0.00 0.00 0.00 0.00 0.00 00:29:30.480 [2024-11-20T15:31:01.714Z] =================================================================================================================== 00:29:30.480 [2024-11-20T15:31:01.714Z] Total : 22749.33 88.86 0.00 0.00 0.00 0.00 0.00 00:29:30.480 00:29:31.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.417 Nvme0n1 : 4.00 22904.00 89.47 0.00 0.00 0.00 0.00 0.00 00:29:31.417 [2024-11-20T15:31:02.651Z] =================================================================================================================== 00:29:31.417 [2024-11-20T15:31:02.651Z] Total : 22904.00 89.47 0.00 0.00 0.00 0.00 0.00 00:29:31.417 00:29:32.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.351 Nvme0n1 : 5.00 22996.80 89.83 0.00 0.00 0.00 0.00 0.00 00:29:32.351 [2024-11-20T15:31:03.585Z] =================================================================================================================== 00:29:32.351 [2024-11-20T15:31:03.585Z] Total : 22996.80 89.83 0.00 0.00 0.00 0.00 0.00 00:29:32.351 00:29:33.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.286 Nvme0n1 : 6.00 23058.67 90.07 0.00 0.00 0.00 0.00 0.00 00:29:33.286 [2024-11-20T15:31:04.520Z] 
=================================================================================================================== 00:29:33.286 [2024-11-20T15:31:04.520Z] Total : 23058.67 90.07 0.00 0.00 0.00 0.00 0.00 00:29:33.286 00:29:34.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.222 Nvme0n1 : 7.00 23102.86 90.25 0.00 0.00 0.00 0.00 0.00 00:29:34.222 [2024-11-20T15:31:05.456Z] =================================================================================================================== 00:29:34.222 [2024-11-20T15:31:05.456Z] Total : 23102.86 90.25 0.00 0.00 0.00 0.00 0.00 00:29:34.222 00:29:35.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.596 Nvme0n1 : 8.00 23136.00 90.38 0.00 0.00 0.00 0.00 0.00 00:29:35.596 [2024-11-20T15:31:06.830Z] =================================================================================================================== 00:29:35.596 [2024-11-20T15:31:06.830Z] Total : 23136.00 90.38 0.00 0.00 0.00 0.00 0.00 00:29:35.596 00:29:36.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.530 Nvme0n1 : 9.00 23163.67 90.48 0.00 0.00 0.00 0.00 0.00 00:29:36.530 [2024-11-20T15:31:07.764Z] =================================================================================================================== 00:29:36.530 [2024-11-20T15:31:07.764Z] Total : 23163.67 90.48 0.00 0.00 0.00 0.00 0.00 00:29:36.530 00:29:37.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.465 Nvme0n1 : 10.00 23190.50 90.59 0.00 0.00 0.00 0.00 0.00 00:29:37.465 [2024-11-20T15:31:08.699Z] =================================================================================================================== 00:29:37.465 [2024-11-20T15:31:08.699Z] Total : 23190.50 90.59 0.00 0.00 0.00 0.00 0.00 00:29:37.465 00:29:37.465 00:29:37.465 Latency(us) 00:29:37.465 [2024-11-20T15:31:08.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.465 Nvme0n1 : 10.00 23191.01 90.59 0.00 0.00 5516.05 3089.55 27462.70 00:29:37.465 [2024-11-20T15:31:08.699Z] =================================================================================================================== 00:29:37.465 [2024-11-20T15:31:08.699Z] Total : 23191.01 90.59 0.00 0.00 5516.05 3089.55 27462.70 00:29:37.465 { 00:29:37.465 "results": [ 00:29:37.465 { 00:29:37.465 "job": "Nvme0n1", 00:29:37.465 "core_mask": "0x2", 00:29:37.465 "workload": "randwrite", 00:29:37.465 "status": "finished", 00:29:37.465 "queue_depth": 128, 00:29:37.465 "io_size": 4096, 00:29:37.465 "runtime": 10.002541, 00:29:37.465 "iops": 23191.007165079354, 00:29:37.465 "mibps": 90.58987173859123, 00:29:37.465 "io_failed": 0, 00:29:37.465 "io_timeout": 0, 00:29:37.465 "avg_latency_us": 5516.050560220588, 00:29:37.465 "min_latency_us": 3089.554285714286, 00:29:37.465 "max_latency_us": 27462.704761904763 00:29:37.465 } 00:29:37.465 ], 00:29:37.465 "core_count": 1 00:29:37.465 } 00:29:37.465 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2115374 00:29:37.465 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2115374 ']' 00:29:37.465 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2115374 
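Annotation: the cluster counts in this run are internally consistent and worth spelling out once. The lvstore uses 4 MiB clusters (--cluster-sz 4194304), so the 200 MiB AIO file yields 49 data clusters and the 400 MiB file yields 99 after bdev_lvol_grow_lvstore (the missing cluster presumably going to lvstore metadata, which is what the reported 49/99 totals imply), while the 150 MiB lvol occupies 38 clusters, leaving 99 - 38 = 61 free, exactly the free_clusters value the teardown checks below. A throwaway re-derivation in shell arithmetic, under that one-metadata-cluster assumption:

# Re-deriving the cluster bookkeeping seen in this run (assumes a single
# metadata cluster, which matches the 49/99 totals reported by the RPCs).
cluster_mb=4                                        # --cluster-sz 4194304
echo $(( 200 / cluster_mb - 1 ))                    # 49: total_data_clusters before the grow
echo $(( 400 / cluster_mb - 1 ))                    # 99: total_data_clusters after bdev_lvol_grow_lvstore
alloc=$(( (150 + cluster_mb - 1) / cluster_mb ))    # 38: clusters backing the 150 MiB lvol
echo $(( 400 / cluster_mb - 1 - alloc ))            # 61: free_clusters left in the lvstore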
00:29:37.465 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:37.465 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.465 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2115374 00:29:37.465 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:37.465 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:37.465 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2115374' 00:29:37.465 killing process with pid 2115374 00:29:37.465 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2115374 00:29:37.465 Received shutdown signal, test time was about 10.000000 seconds 00:29:37.465 00:29:37.465 Latency(us) 00:29:37.465 [2024-11-20T15:31:08.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.465 [2024-11-20T15:31:08.699Z] =================================================================================================================== 00:29:37.465 [2024-11-20T15:31:08.699Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.465 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2115374 00:29:37.466 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.724 16:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:37.982 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:37.982 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:38.242 [2024-11-20 16:31:09.427011] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e 
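Annotation: the NOT wrapper used next is the expected-failure idiom from autotest_common.sh. After bdev_aio_delete has pulled the base bdev out from under the still-open lvstore (the "closing lvstore lvs" notice above), bdev_lvol_get_lvstores for that UUID must now fail, and the wrapper only returns success when the wrapped command exits non-zero, as the -19 / "No such device" JSON-RPC response traced below confirms. In essence it behaves like the simplified stand-in sketched here; the real helper also screens exit codes above 128 (signal deaths) so a crash is not mistaken for a clean failure.

# Simplified sketch of the expected-failure check; the real NOT() helper in
# autotest_common.sh does additional exit-status screening.
NOT() { if "$@"; then return 1; else return 0; fi; }
NOT $SPDK_ROOT/scripts/rpc.py bdev_lvol_get_lvstores -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e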
00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:38.242 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e 00:29:38.501 request: 00:29:38.501 { 00:29:38.501 "uuid": "437ebb4f-0622-4567-ac10-85d2a5e3f67e", 00:29:38.501 "method": "bdev_lvol_get_lvstores", 00:29:38.501 "req_id": 1 00:29:38.501 } 00:29:38.501 Got JSON-RPC error response 00:29:38.501 response: 00:29:38.501 { 00:29:38.501 "code": -19, 00:29:38.501 "message": "No such device" 00:29:38.501 } 00:29:38.501 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:38.501 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:38.501 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:38.501 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:38.501 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:38.760 aio_bdev 00:29:38.760 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
324505aa-b52e-41d2-b45e-3baf5e18ff1f 00:29:38.760 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=324505aa-b52e-41d2-b45e-3baf5e18ff1f 00:29:38.760 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:38.760 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:38.760 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:38.760 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:38.760 16:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:39.019 16:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 324505aa-b52e-41d2-b45e-3baf5e18ff1f -t 2000 00:29:39.019 [ 00:29:39.019 { 00:29:39.019 "name": "324505aa-b52e-41d2-b45e-3baf5e18ff1f", 00:29:39.019 "aliases": [ 00:29:39.019 "lvs/lvol" 00:29:39.019 ], 00:29:39.019 "product_name": "Logical Volume", 00:29:39.019 "block_size": 4096, 00:29:39.019 "num_blocks": 38912, 00:29:39.019 "uuid": "324505aa-b52e-41d2-b45e-3baf5e18ff1f", 00:29:39.019 "assigned_rate_limits": { 00:29:39.019 "rw_ios_per_sec": 0, 00:29:39.019 "rw_mbytes_per_sec": 0, 00:29:39.019 "r_mbytes_per_sec": 0, 00:29:39.019 "w_mbytes_per_sec": 0 00:29:39.019 }, 00:29:39.019 "claimed": false, 00:29:39.019 "zoned": false, 00:29:39.019 "supported_io_types": { 00:29:39.019 "read": true, 00:29:39.019 "write": true, 00:29:39.019 "unmap": true, 00:29:39.019 "flush": false, 00:29:39.019 "reset": true, 00:29:39.019 "nvme_admin": false, 00:29:39.019 "nvme_io": false, 00:29:39.019 "nvme_io_md": false, 00:29:39.019 "write_zeroes": true, 00:29:39.019 "zcopy": false, 00:29:39.019 "get_zone_info": false, 00:29:39.019 "zone_management": false, 00:29:39.019 "zone_append": false, 00:29:39.019 "compare": false, 00:29:39.019 "compare_and_write": false, 00:29:39.019 "abort": false, 00:29:39.019 "seek_hole": true, 00:29:39.019 "seek_data": true, 00:29:39.019 "copy": false, 00:29:39.019 "nvme_iov_md": false 00:29:39.019 }, 00:29:39.019 "driver_specific": { 00:29:39.019 "lvol": { 00:29:39.019 "lvol_store_uuid": "437ebb4f-0622-4567-ac10-85d2a5e3f67e", 00:29:39.019 "base_bdev": "aio_bdev", 00:29:39.019 "thin_provision": false, 00:29:39.019 "num_allocated_clusters": 38, 00:29:39.019 "snapshot": false, 00:29:39.019 "clone": false, 00:29:39.019 "esnap_clone": false 00:29:39.019 } 00:29:39.019 } 00:29:39.019 } 00:29:39.019 ] 00:29:39.019 16:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:39.019 16:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e 00:29:39.019 16:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:39.278 16:31:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:39.278 16:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:39.278 16:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e 00:29:39.536 16:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:39.536 16:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 324505aa-b52e-41d2-b45e-3baf5e18ff1f 00:29:39.795 16:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 437ebb4f-0622-4567-ac10-85d2a5e3f67e 00:29:39.795 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:40.054 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.054 00:29:40.054 real 0m15.566s 00:29:40.054 user 0m15.100s 00:29:40.054 sys 0m1.499s 00:29:40.054 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.054 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:40.054 ************************************ 00:29:40.054 END TEST lvs_grow_clean 00:29:40.054 ************************************ 00:29:40.054 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:40.054 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:40.054 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.054 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:40.313 ************************************ 00:29:40.313 START TEST lvs_grow_dirty 00:29:40.313 ************************************ 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:40.313 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:40.571 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:40.571 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:40.571 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:40.830 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:40.830 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:40.830 16:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 lvol 150 00:29:41.089 16:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4f4e3428-657a-443b-9e50-61757a8b1e29 00:29:41.089 16:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:41.089 16:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:41.089 [2024-11-20 16:31:12.270937] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:41.089 [2024-11-20 16:31:12.271065] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:41.089 true 00:29:41.089 16:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:41.089 16:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:41.348 16:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:41.348 16:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:41.607 16:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4f4e3428-657a-443b-9e50-61757a8b1e29 00:29:41.607 16:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:41.866 [2024-11-20 16:31:12.971393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.866 16:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:42.125 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2117954 00:29:42.125 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:42.125 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:42.125 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2117954 /var/tmp/bdevperf.sock 00:29:42.125 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2117954 ']' 00:29:42.125 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:42.125 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.125 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:42.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
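Annotation: the dirty variant repeats the same resize dance as the clean test. The backing file is grown from 200M to 400M and bdev_aio_rescan tells SPDK about it, hence the "old block count 51200, new block count 102400" notice above (51200 x 4096 B = 200 MiB, 102400 x 4096 B = 400 MiB), yet the lvstore keeps reporting 49 data clusters until bdev_lvol_grow_lvstore is issued once the bdevperf job below is underway. A minimal sketch of that resize step, with $SPDK_ROOT standing in for the workspace path and the lvstore UUID taken from this run:

aio_file=$SPDK_ROOT/test/nvmf/target/aio_bdev
truncate -s 400M "$aio_file"                        # 102400 blocks of 4096 B
$SPDK_ROOT/scripts/rpc.py bdev_aio_rescan aio_bdev  # the AIO bdev now sees the new size
# Still reports 49 until the lvstore itself is grown via bdev_lvol_grow_lvstore:
$SPDK_ROOT/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 | jq -r '.[0].total_data_clusters'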
00:29:42.125 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.125 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:42.125 [2024-11-20 16:31:13.218128] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:29:42.125 [2024-11-20 16:31:13.218175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117954 ] 00:29:42.125 [2024-11-20 16:31:13.291195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.125 [2024-11-20 16:31:13.333222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.384 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.384 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:42.384 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:42.643 Nvme0n1 00:29:42.643 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:42.643 [ 00:29:42.643 { 00:29:42.643 "name": "Nvme0n1", 00:29:42.643 "aliases": [ 00:29:42.643 "4f4e3428-657a-443b-9e50-61757a8b1e29" 00:29:42.643 ], 00:29:42.643 "product_name": "NVMe disk", 00:29:42.643 "block_size": 4096, 00:29:42.643 "num_blocks": 38912, 00:29:42.643 "uuid": "4f4e3428-657a-443b-9e50-61757a8b1e29", 00:29:42.643 "numa_id": 1, 00:29:42.643 "assigned_rate_limits": { 00:29:42.643 "rw_ios_per_sec": 0, 00:29:42.643 "rw_mbytes_per_sec": 0, 00:29:42.643 "r_mbytes_per_sec": 0, 00:29:42.643 "w_mbytes_per_sec": 0 00:29:42.643 }, 00:29:42.643 "claimed": false, 00:29:42.643 "zoned": false, 00:29:42.643 "supported_io_types": { 00:29:42.643 "read": true, 00:29:42.643 "write": true, 00:29:42.643 "unmap": true, 00:29:42.643 "flush": true, 00:29:42.643 "reset": true, 00:29:42.643 "nvme_admin": true, 00:29:42.643 "nvme_io": true, 00:29:42.643 "nvme_io_md": false, 00:29:42.643 "write_zeroes": true, 00:29:42.643 "zcopy": false, 00:29:42.643 "get_zone_info": false, 00:29:42.643 "zone_management": false, 00:29:42.643 "zone_append": false, 00:29:42.643 "compare": true, 00:29:42.643 "compare_and_write": true, 00:29:42.643 "abort": true, 00:29:42.643 "seek_hole": false, 00:29:42.643 "seek_data": false, 00:29:42.643 "copy": true, 00:29:42.643 "nvme_iov_md": false 00:29:42.643 }, 00:29:42.643 "memory_domains": [ 00:29:42.643 { 00:29:42.643 "dma_device_id": "system", 00:29:42.643 "dma_device_type": 1 00:29:42.643 } 00:29:42.643 ], 00:29:42.643 "driver_specific": { 00:29:42.643 "nvme": [ 00:29:42.643 { 00:29:42.643 "trid": { 00:29:42.643 "trtype": "TCP", 00:29:42.643 "adrfam": "IPv4", 00:29:42.643 "traddr": "10.0.0.2", 00:29:42.643 "trsvcid": "4420", 00:29:42.643 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:42.643 }, 00:29:42.643 "ctrlr_data": 
{ 00:29:42.643 "cntlid": 1, 00:29:42.643 "vendor_id": "0x8086", 00:29:42.643 "model_number": "SPDK bdev Controller", 00:29:42.643 "serial_number": "SPDK0", 00:29:42.643 "firmware_revision": "25.01", 00:29:42.643 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.643 "oacs": { 00:29:42.643 "security": 0, 00:29:42.643 "format": 0, 00:29:42.643 "firmware": 0, 00:29:42.643 "ns_manage": 0 00:29:42.643 }, 00:29:42.643 "multi_ctrlr": true, 00:29:42.643 "ana_reporting": false 00:29:42.643 }, 00:29:42.643 "vs": { 00:29:42.643 "nvme_version": "1.3" 00:29:42.643 }, 00:29:42.643 "ns_data": { 00:29:42.643 "id": 1, 00:29:42.643 "can_share": true 00:29:42.643 } 00:29:42.643 } 00:29:42.643 ], 00:29:42.643 "mp_policy": "active_passive" 00:29:42.643 } 00:29:42.643 } 00:29:42.643 ] 00:29:42.643 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2117966 00:29:42.643 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:42.643 16:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:42.902 Running I/O for 10 seconds... 00:29:43.835 Latency(us) 00:29:43.835 [2024-11-20T15:31:15.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.835 Nvme0n1 : 1.00 22543.00 88.06 0.00 0.00 0.00 0.00 0.00 00:29:43.835 [2024-11-20T15:31:15.069Z] =================================================================================================================== 00:29:43.835 [2024-11-20T15:31:15.069Z] Total : 22543.00 88.06 0.00 0.00 0.00 0.00 0.00 00:29:43.835 00:29:44.770 16:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:44.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.770 Nvme0n1 : 2.00 22892.00 89.42 0.00 0.00 0.00 0.00 0.00 00:29:44.770 [2024-11-20T15:31:16.004Z] =================================================================================================================== 00:29:44.770 [2024-11-20T15:31:16.004Z] Total : 22892.00 89.42 0.00 0.00 0.00 0.00 0.00 00:29:44.770 00:29:45.029 true 00:29:45.029 16:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:45.029 16:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:45.286 16:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:45.286 16:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:45.287 16:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2117966 00:29:45.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.853 Nvme0n1 : 
3.00 22966.00 89.71 0.00 0.00 0.00 0.00 0.00 00:29:45.853 [2024-11-20T15:31:17.087Z] =================================================================================================================== 00:29:45.853 [2024-11-20T15:31:17.087Z] Total : 22966.00 89.71 0.00 0.00 0.00 0.00 0.00 00:29:45.853 00:29:46.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:46.786 Nvme0n1 : 4.00 23066.50 90.10 0.00 0.00 0.00 0.00 0.00 00:29:46.786 [2024-11-20T15:31:18.020Z] =================================================================================================================== 00:29:46.786 [2024-11-20T15:31:18.020Z] Total : 23066.50 90.10 0.00 0.00 0.00 0.00 0.00 00:29:46.786 00:29:48.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.171 Nvme0n1 : 5.00 23126.80 90.34 0.00 0.00 0.00 0.00 0.00 00:29:48.171 [2024-11-20T15:31:19.405Z] =================================================================================================================== 00:29:48.171 [2024-11-20T15:31:19.405Z] Total : 23126.80 90.34 0.00 0.00 0.00 0.00 0.00 00:29:48.171 00:29:48.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.746 Nvme0n1 : 6.00 23172.67 90.52 0.00 0.00 0.00 0.00 0.00 00:29:48.746 [2024-11-20T15:31:19.980Z] =================================================================================================================== 00:29:48.746 [2024-11-20T15:31:19.980Z] Total : 23172.67 90.52 0.00 0.00 0.00 0.00 0.00 00:29:48.746 00:29:50.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.120 Nvme0n1 : 7.00 23164.29 90.49 0.00 0.00 0.00 0.00 0.00 00:29:50.120 [2024-11-20T15:31:21.354Z] =================================================================================================================== 00:29:50.120 [2024-11-20T15:31:21.354Z] Total : 23164.29 90.49 0.00 0.00 0.00 0.00 0.00 00:29:50.120 00:29:50.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.801 Nvme0n1 : 8.00 23197.75 90.62 0.00 0.00 0.00 0.00 0.00 00:29:50.801 [2024-11-20T15:31:22.035Z] =================================================================================================================== 00:29:50.801 [2024-11-20T15:31:22.035Z] Total : 23197.75 90.62 0.00 0.00 0.00 0.00 0.00 00:29:50.801 00:29:51.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:51.761 Nvme0n1 : 9.00 23229.11 90.74 0.00 0.00 0.00 0.00 0.00 00:29:51.761 [2024-11-20T15:31:22.995Z] =================================================================================================================== 00:29:51.761 [2024-11-20T15:31:22.995Z] Total : 23229.11 90.74 0.00 0.00 0.00 0.00 0.00 00:29:51.761 00:29:53.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.136 Nvme0n1 : 10.00 23255.70 90.84 0.00 0.00 0.00 0.00 0.00 00:29:53.136 [2024-11-20T15:31:24.370Z] =================================================================================================================== 00:29:53.136 [2024-11-20T15:31:24.370Z] Total : 23255.70 90.84 0.00 0.00 0.00 0.00 0.00 00:29:53.136 00:29:53.136 00:29:53.136 Latency(us) 00:29:53.136 [2024-11-20T15:31:24.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.136 Nvme0n1 : 10.01 23255.57 90.84 0.00 0.00 5501.20 3120.76 25090.93 00:29:53.136 
[2024-11-20T15:31:24.370Z] =================================================================================================================== 00:29:53.136 [2024-11-20T15:31:24.370Z] Total : 23255.57 90.84 0.00 0.00 5501.20 3120.76 25090.93 00:29:53.136 { 00:29:53.136 "results": [ 00:29:53.136 { 00:29:53.136 "job": "Nvme0n1", 00:29:53.136 "core_mask": "0x2", 00:29:53.136 "workload": "randwrite", 00:29:53.136 "status": "finished", 00:29:53.136 "queue_depth": 128, 00:29:53.136 "io_size": 4096, 00:29:53.136 "runtime": 10.00556, 00:29:53.136 "iops": 23255.56990313386, 00:29:53.136 "mibps": 90.84206993411664, 00:29:53.136 "io_failed": 0, 00:29:53.136 "io_timeout": 0, 00:29:53.136 "avg_latency_us": 5501.196092506014, 00:29:53.136 "min_latency_us": 3120.7619047619046, 00:29:53.136 "max_latency_us": 25090.925714285713 00:29:53.136 } 00:29:53.136 ], 00:29:53.136 "core_count": 1 00:29:53.136 } 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2117954 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2117954 ']' 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2117954 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117954 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117954' 00:29:53.136 killing process with pid 2117954 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2117954 00:29:53.136 Received shutdown signal, test time was about 10.000000 seconds 00:29:53.136 00:29:53.136 Latency(us) 00:29:53.136 [2024-11-20T15:31:24.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.136 [2024-11-20T15:31:24.370Z] =================================================================================================================== 00:29:53.136 [2024-11-20T15:31:24.370Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2117954 00:29:53.136 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.396 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:29:53.655 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:53.655 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:53.655 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:53.655 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:53.655 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2114878 00:29:53.655 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2114878 00:29:53.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2114878 Killed "${NVMF_APP[@]}" "$@" 00:29:53.655 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2119803 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2119803 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2119803 ']' 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.656 16:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:53.915 [2024-11-20 16:31:24.901463] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:53.915 [2024-11-20 16:31:24.902346] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:29:53.915 [2024-11-20 16:31:24.902380] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.915 [2024-11-20 16:31:24.981765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.915 [2024-11-20 16:31:25.022034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.915 [2024-11-20 16:31:25.022066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.915 [2024-11-20 16:31:25.022073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.915 [2024-11-20 16:31:25.022079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.915 [2024-11-20 16:31:25.022084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.915 [2024-11-20 16:31:25.022630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.915 [2024-11-20 16:31:25.090820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:53.915 [2024-11-20 16:31:25.091050] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:53.915 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.915 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:53.915 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.915 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.915 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:53.915 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.915 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:54.175 [2024-11-20 16:31:25.320037] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:54.175 [2024-11-20 16:31:25.320243] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:54.175 [2024-11-20 16:31:25.320327] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:54.175 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:54.175 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4f4e3428-657a-443b-9e50-61757a8b1e29 00:29:54.175 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=4f4e3428-657a-443b-9e50-61757a8b1e29 00:29:54.175 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:54.175 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:54.175 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:54.175 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:54.175 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:54.434 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4f4e3428-657a-443b-9e50-61757a8b1e29 -t 2000 00:29:54.693 [ 00:29:54.693 { 00:29:54.693 "name": "4f4e3428-657a-443b-9e50-61757a8b1e29", 00:29:54.693 "aliases": [ 00:29:54.693 "lvs/lvol" 00:29:54.693 ], 00:29:54.693 "product_name": "Logical Volume", 00:29:54.693 "block_size": 4096, 00:29:54.693 "num_blocks": 38912, 00:29:54.693 "uuid": "4f4e3428-657a-443b-9e50-61757a8b1e29", 00:29:54.693 "assigned_rate_limits": { 00:29:54.693 "rw_ios_per_sec": 0, 00:29:54.693 "rw_mbytes_per_sec": 0, 00:29:54.693 
"r_mbytes_per_sec": 0, 00:29:54.693 "w_mbytes_per_sec": 0 00:29:54.693 }, 00:29:54.693 "claimed": false, 00:29:54.693 "zoned": false, 00:29:54.693 "supported_io_types": { 00:29:54.694 "read": true, 00:29:54.694 "write": true, 00:29:54.694 "unmap": true, 00:29:54.694 "flush": false, 00:29:54.694 "reset": true, 00:29:54.694 "nvme_admin": false, 00:29:54.694 "nvme_io": false, 00:29:54.694 "nvme_io_md": false, 00:29:54.694 "write_zeroes": true, 00:29:54.694 "zcopy": false, 00:29:54.694 "get_zone_info": false, 00:29:54.694 "zone_management": false, 00:29:54.694 "zone_append": false, 00:29:54.694 "compare": false, 00:29:54.694 "compare_and_write": false, 00:29:54.694 "abort": false, 00:29:54.694 "seek_hole": true, 00:29:54.694 "seek_data": true, 00:29:54.694 "copy": false, 00:29:54.694 "nvme_iov_md": false 00:29:54.694 }, 00:29:54.694 "driver_specific": { 00:29:54.694 "lvol": { 00:29:54.694 "lvol_store_uuid": "9a569220-26b2-43c7-a4fe-a9e5dad21f17", 00:29:54.694 "base_bdev": "aio_bdev", 00:29:54.694 "thin_provision": false, 00:29:54.694 "num_allocated_clusters": 38, 00:29:54.694 "snapshot": false, 00:29:54.694 "clone": false, 00:29:54.694 "esnap_clone": false 00:29:54.694 } 00:29:54.694 } 00:29:54.694 } 00:29:54.694 ] 00:29:54.694 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:54.694 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:54.694 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:54.694 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:54.694 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:54.694 16:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:54.953 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:54.953 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:55.211 [2024-11-20 16:31:26.275102] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:55.211 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:55.211 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:55.211 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:55.211 16:31:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.211 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.211 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.211 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.211 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.211 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.211 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.212 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:55.212 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:55.471 request: 00:29:55.471 { 00:29:55.471 "uuid": "9a569220-26b2-43c7-a4fe-a9e5dad21f17", 00:29:55.471 "method": "bdev_lvol_get_lvstores", 00:29:55.471 "req_id": 1 00:29:55.471 } 00:29:55.471 Got JSON-RPC error response 00:29:55.471 response: 00:29:55.471 { 00:29:55.471 "code": -19, 00:29:55.471 "message": "No such device" 00:29:55.471 } 00:29:55.471 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:55.471 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:55.471 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:55.471 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:55.471 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:55.471 aio_bdev 00:29:55.471 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4f4e3428-657a-443b-9e50-61757a8b1e29 00:29:55.471 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=4f4e3428-657a-443b-9e50-61757a8b1e29 00:29:55.471 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:55.471 16:31:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:55.471 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:55.471 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:55.471 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:55.730 16:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4f4e3428-657a-443b-9e50-61757a8b1e29 -t 2000 00:29:55.988 [ 00:29:55.988 { 00:29:55.988 "name": "4f4e3428-657a-443b-9e50-61757a8b1e29", 00:29:55.988 "aliases": [ 00:29:55.988 "lvs/lvol" 00:29:55.988 ], 00:29:55.988 "product_name": "Logical Volume", 00:29:55.988 "block_size": 4096, 00:29:55.988 "num_blocks": 38912, 00:29:55.988 "uuid": "4f4e3428-657a-443b-9e50-61757a8b1e29", 00:29:55.988 "assigned_rate_limits": { 00:29:55.988 "rw_ios_per_sec": 0, 00:29:55.988 "rw_mbytes_per_sec": 0, 00:29:55.988 "r_mbytes_per_sec": 0, 00:29:55.988 "w_mbytes_per_sec": 0 00:29:55.988 }, 00:29:55.988 "claimed": false, 00:29:55.988 "zoned": false, 00:29:55.988 "supported_io_types": { 00:29:55.988 "read": true, 00:29:55.988 "write": true, 00:29:55.988 "unmap": true, 00:29:55.988 "flush": false, 00:29:55.988 "reset": true, 00:29:55.988 "nvme_admin": false, 00:29:55.988 "nvme_io": false, 00:29:55.988 "nvme_io_md": false, 00:29:55.988 "write_zeroes": true, 00:29:55.988 "zcopy": false, 00:29:55.988 "get_zone_info": false, 00:29:55.988 "zone_management": false, 00:29:55.988 "zone_append": false, 00:29:55.988 "compare": false, 00:29:55.988 "compare_and_write": false, 00:29:55.988 "abort": false, 00:29:55.988 "seek_hole": true, 00:29:55.988 "seek_data": true, 00:29:55.988 "copy": false, 00:29:55.988 "nvme_iov_md": false 00:29:55.988 }, 00:29:55.988 "driver_specific": { 00:29:55.988 "lvol": { 00:29:55.988 "lvol_store_uuid": "9a569220-26b2-43c7-a4fe-a9e5dad21f17", 00:29:55.988 "base_bdev": "aio_bdev", 00:29:55.988 "thin_provision": false, 00:29:55.988 "num_allocated_clusters": 38, 00:29:55.988 "snapshot": false, 00:29:55.988 "clone": false, 00:29:55.988 "esnap_clone": false 00:29:55.988 } 00:29:55.988 } 00:29:55.988 } 00:29:55.988 ] 00:29:55.988 16:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:55.988 16:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:55.988 16:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:56.247 16:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:56.247 16:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:56.247 16:31:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:56.247 16:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:56.247 16:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4f4e3428-657a-443b-9e50-61757a8b1e29 00:29:56.506 16:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9a569220-26b2-43c7-a4fe-a9e5dad21f17 00:29:56.765 16:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:57.024 00:29:57.024 real 0m16.764s 00:29:57.024 user 0m34.117s 00:29:57.024 sys 0m3.948s 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:57.024 ************************************ 00:29:57.024 END TEST lvs_grow_dirty 00:29:57.024 ************************************ 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:57.024 nvmf_trace.0 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:57.024 rmmod nvme_tcp 00:29:57.024 rmmod nvme_fabrics 00:29:57.024 rmmod nvme_keyring 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2119803 ']' 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2119803 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2119803 ']' 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2119803 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:57.024 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2119803 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2119803' 00:29:57.283 killing process with pid 2119803 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2119803 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2119803 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.283 16:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.819 00:29:59.819 real 0m41.526s 00:29:59.819 user 0m51.694s 00:29:59.819 sys 0m10.385s 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:59.819 ************************************ 00:29:59.819 END TEST nvmf_lvs_grow 00:29:59.819 ************************************ 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:59.819 ************************************ 00:29:59.819 START TEST nvmf_bdev_io_wait 00:29:59.819 ************************************ 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:59.819 * Looking for test storage... 
00:29:59.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.819 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:59.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.819 --rc genhtml_branch_coverage=1 00:29:59.819 --rc genhtml_function_coverage=1 00:29:59.819 --rc genhtml_legend=1 00:29:59.819 --rc geninfo_all_blocks=1 00:29:59.820 --rc geninfo_unexecuted_blocks=1 00:29:59.820 00:29:59.820 ' 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:59.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.820 --rc genhtml_branch_coverage=1 00:29:59.820 --rc genhtml_function_coverage=1 00:29:59.820 --rc genhtml_legend=1 00:29:59.820 --rc geninfo_all_blocks=1 00:29:59.820 --rc geninfo_unexecuted_blocks=1 00:29:59.820 00:29:59.820 ' 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:59.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.820 --rc genhtml_branch_coverage=1 00:29:59.820 --rc genhtml_function_coverage=1 00:29:59.820 --rc genhtml_legend=1 00:29:59.820 --rc geninfo_all_blocks=1 00:29:59.820 --rc geninfo_unexecuted_blocks=1 00:29:59.820 00:29:59.820 ' 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:59.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.820 --rc genhtml_branch_coverage=1 00:29:59.820 --rc genhtml_function_coverage=1 00:29:59.820 --rc genhtml_legend=1 00:29:59.820 --rc geninfo_all_blocks=1 00:29:59.820 --rc 
geninfo_unexecuted_blocks=1 00:29:59.820 00:29:59.820 ' 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.820 16:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:06.394 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:06.395 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:06.395 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:06.395 Found net devices under 0000:86:00.0: cvl_0_0 00:30:06.395 
16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:06.395 Found net devices under 0000:86:00.1: cvl_0_1 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:30:06.395 00:30:06.395 --- 10.0.0.2 ping statistics --- 00:30:06.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.395 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:06.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:30:06.395 00:30:06.395 --- 10.0.0.1 ping statistics --- 00:30:06.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.395 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.395 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2123852 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2123852 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2123852 ']' 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
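The nvmf_tcp_init trace above reduces to a short, reproducible sequence: one E810 port is moved into a private network namespace for the target, both ends get a 10.0.0.x/24 address, TCP port 4420 is opened in iptables, and reachability is verified with ping before the target is launched inside that namespace. A condensed sketch of those steps, reusing the interface and namespace names from this log (adjust them on other systems):
# Sketch only -- condensed from the nvmf_tcp_init trace above; names are taken from this log.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                                # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF   # let NVMe/TCP in; tag for cleanup
ping -c 1 10.0.0.2                                             # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                         # target namespace -> root namespace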
00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.396 [2024-11-20 16:31:36.768122] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:06.396 [2024-11-20 16:31:36.769054] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:30:06.396 [2024-11-20 16:31:36.769086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.396 [2024-11-20 16:31:36.850687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.396 [2024-11-20 16:31:36.893528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.396 [2024-11-20 16:31:36.893565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.396 [2024-11-20 16:31:36.893572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.396 [2024-11-20 16:31:36.893577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.396 [2024-11-20 16:31:36.893582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.396 [2024-11-20 16:31:36.895002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.396 [2024-11-20 16:31:36.895042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.396 [2024-11-20 16:31:36.895146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.396 [2024-11-20 16:31:36.895148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.396 [2024-11-20 16:31:36.895538] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
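The app_setup_trace notices above also say how to inspect the running target's tracepoints. A minimal sketch of following them; the spdk_trace binary location is an assumption (same build tree as nvmf_tgt in this workspace):
# Sketch only -- acting on the tracepoint notices above; the binary path is assumed, not from the log.
TRACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace
"$TRACE" -s nvmf -i 0 > /tmp/nvmf_trace.txt     # live snapshot, as suggested by the notice
cp /dev/shm/nvmf_trace.0 /tmp/                  # or keep the raw shared-memory file for offline analysis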
00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.396 16:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.396 [2024-11-20 16:31:37.016143] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:06.396 [2024-11-20 16:31:37.016665] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:06.396 [2024-11-20 16:31:37.016677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:06.396 [2024-11-20 16:31:37.016837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
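The two rpc_cmd calls above are ordinary JSON-RPCs against the target's default UNIX socket; because the target was started with --wait-for-rpc inside the namespace, it sits idle until framework_start_init. A sketch of issuing them directly with SPDK's rpc.py (script path assumed from this workspace):
# Sketch only -- the rpc_cmd calls above issued by hand; rpc.py talks to /var/tmp/spdk.sock by default.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" bdev_set_options -p 5 -c 1     # tiny bdev_io pool/cache, which is what this bdev_io_wait test exercises
"$RPC" framework_start_init           # finish startup now that pre-init options are set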
00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.396 [2024-11-20 16:31:37.027957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.396 Malloc0 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.396 [2024-11-20 16:31:37.100070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2123874 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2123876 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.396 { 00:30:06.396 "params": { 00:30:06.396 "name": "Nvme$subsystem", 00:30:06.396 "trtype": "$TEST_TRANSPORT", 00:30:06.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.396 "adrfam": "ipv4", 00:30:06.396 "trsvcid": "$NVMF_PORT", 00:30:06.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.396 "hdgst": ${hdgst:-false}, 00:30:06.396 "ddgst": ${ddgst:-false} 00:30:06.396 }, 00:30:06.396 "method": "bdev_nvme_attach_controller" 00:30:06.396 } 00:30:06.396 EOF 00:30:06.396 )") 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2123878 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:06.396 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2123881 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.397 { 00:30:06.397 "params": { 00:30:06.397 "name": "Nvme$subsystem", 00:30:06.397 "trtype": "$TEST_TRANSPORT", 00:30:06.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.397 "adrfam": "ipv4", 00:30:06.397 "trsvcid": "$NVMF_PORT", 00:30:06.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.397 "hdgst": ${hdgst:-false}, 00:30:06.397 "ddgst": ${ddgst:-false} 00:30:06.397 }, 00:30:06.397 "method": "bdev_nvme_attach_controller" 
00:30:06.397 } 00:30:06.397 EOF 00:30:06.397 )") 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.397 { 00:30:06.397 "params": { 00:30:06.397 "name": "Nvme$subsystem", 00:30:06.397 "trtype": "$TEST_TRANSPORT", 00:30:06.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.397 "adrfam": "ipv4", 00:30:06.397 "trsvcid": "$NVMF_PORT", 00:30:06.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.397 "hdgst": ${hdgst:-false}, 00:30:06.397 "ddgst": ${ddgst:-false} 00:30:06.397 }, 00:30:06.397 "method": "bdev_nvme_attach_controller" 00:30:06.397 } 00:30:06.397 EOF 00:30:06.397 )") 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.397 { 00:30:06.397 "params": { 00:30:06.397 "name": "Nvme$subsystem", 00:30:06.397 "trtype": "$TEST_TRANSPORT", 00:30:06.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.397 "adrfam": "ipv4", 00:30:06.397 "trsvcid": "$NVMF_PORT", 00:30:06.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.397 "hdgst": ${hdgst:-false}, 00:30:06.397 "ddgst": ${ddgst:-false} 00:30:06.397 }, 00:30:06.397 "method": "bdev_nvme_attach_controller" 00:30:06.397 } 00:30:06.397 EOF 00:30:06.397 )") 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2123874 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.397 "params": { 00:30:06.397 "name": "Nvme1", 00:30:06.397 "trtype": "tcp", 00:30:06.397 "traddr": "10.0.0.2", 00:30:06.397 "adrfam": "ipv4", 00:30:06.397 "trsvcid": "4420", 00:30:06.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.397 "hdgst": false, 00:30:06.397 "ddgst": false 00:30:06.397 }, 00:30:06.397 "method": "bdev_nvme_attach_controller" 00:30:06.397 }' 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.397 "params": { 00:30:06.397 "name": "Nvme1", 00:30:06.397 "trtype": "tcp", 00:30:06.397 "traddr": "10.0.0.2", 00:30:06.397 "adrfam": "ipv4", 00:30:06.397 "trsvcid": "4420", 00:30:06.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.397 "hdgst": false, 00:30:06.397 "ddgst": false 00:30:06.397 }, 00:30:06.397 "method": "bdev_nvme_attach_controller" 00:30:06.397 }' 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.397 "params": { 00:30:06.397 "name": "Nvme1", 00:30:06.397 "trtype": "tcp", 00:30:06.397 "traddr": "10.0.0.2", 00:30:06.397 "adrfam": "ipv4", 00:30:06.397 "trsvcid": "4420", 00:30:06.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.397 "hdgst": false, 00:30:06.397 "ddgst": false 00:30:06.397 }, 00:30:06.397 "method": "bdev_nvme_attach_controller" 00:30:06.397 }' 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.397 16:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.397 "params": { 00:30:06.397 "name": "Nvme1", 00:30:06.397 "trtype": "tcp", 00:30:06.397 "traddr": "10.0.0.2", 00:30:06.397 "adrfam": "ipv4", 00:30:06.397 "trsvcid": "4420", 00:30:06.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.397 "hdgst": false, 00:30:06.397 "ddgst": false 00:30:06.397 }, 00:30:06.397 "method": "bdev_nvme_attach_controller" 00:30:06.397 }' 00:30:06.397 [2024-11-20 16:31:37.150716] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:30:06.397 [2024-11-20 16:31:37.150768] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:06.397 [2024-11-20 16:31:37.152226] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:30:06.397 [2024-11-20 16:31:37.152271] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:06.397 [2024-11-20 16:31:37.154812] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:30:06.397 [2024-11-20 16:31:37.154854] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:06.397 [2024-11-20 16:31:37.156186] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:30:06.397 [2024-11-20 16:31:37.156239] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:06.397 [2024-11-20 16:31:37.336714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.397 [2024-11-20 16:31:37.379157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:06.397 [2024-11-20 16:31:37.437280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.397 [2024-11-20 16:31:37.485581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:06.397 [2024-11-20 16:31:37.493616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.397 [2024-11-20 16:31:37.536094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:06.397 [2024-11-20 16:31:37.549801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.397 [2024-11-20 16:31:37.589643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:06.656 Running I/O for 1 seconds... 00:30:06.656 Running I/O for 1 seconds... 00:30:06.656 Running I/O for 1 seconds... 00:30:06.656 Running I/O for 1 seconds... 
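The four bdevperf jobs above (write, read, flush, unmap on core masks 0x10/0x20/0x40/0x80) each receive their NVMe-oF attach configuration through a process substitution (--json /dev/fd/63). A sketch of the "write" job with a temporary file instead, so the generated config is visible; the envelope around the bdev_nvme_attach_controller entry follows the standard SPDK JSON config layout and is spelled out here as an assumption:
# Sketch only -- one of the four bdevperf invocations traced above, with the generated JSON written out.
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
cat > /tmp/nvme1.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
"$BDEVPERF" -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256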
00:30:07.590 13914.00 IOPS, 54.35 MiB/s 00:30:07.590 Latency(us) 00:30:07.590 [2024-11-20T15:31:38.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.591 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:07.591 Nvme1n1 : 1.01 13981.31 54.61 0.00 0.00 9129.76 3620.08 10548.18 00:30:07.591 [2024-11-20T15:31:38.825Z] =================================================================================================================== 00:30:07.591 [2024-11-20T15:31:38.825Z] Total : 13981.31 54.61 0.00 0.00 9129.76 3620.08 10548.18 00:30:07.591 6684.00 IOPS, 26.11 MiB/s 00:30:07.591 Latency(us) 00:30:07.591 [2024-11-20T15:31:38.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.591 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:07.591 Nvme1n1 : 1.01 6721.15 26.25 0.00 0.00 18880.24 4275.44 27962.03 00:30:07.591 [2024-11-20T15:31:38.825Z] =================================================================================================================== 00:30:07.591 [2024-11-20T15:31:38.825Z] Total : 6721.15 26.25 0.00 0.00 18880.24 4275.44 27962.03 00:30:07.849 247256.00 IOPS, 965.84 MiB/s 00:30:07.849 Latency(us) 00:30:07.849 [2024-11-20T15:31:39.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.849 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:07.849 Nvme1n1 : 1.00 246852.50 964.27 0.00 0.00 515.95 224.30 1630.60 00:30:07.849 [2024-11-20T15:31:39.083Z] =================================================================================================================== 00:30:07.849 [2024-11-20T15:31:39.083Z] Total : 246852.50 964.27 0.00 0.00 515.95 224.30 1630.60 00:30:07.849 16:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2123876 00:30:07.849 7163.00 IOPS, 27.98 MiB/s 00:30:07.849 Latency(us) 00:30:07.849 [2024-11-20T15:31:39.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.849 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:07.849 Nvme1n1 : 1.00 7276.03 28.42 0.00 0.00 17558.23 2559.02 35951.18 00:30:07.849 [2024-11-20T15:31:39.083Z] =================================================================================================================== 00:30:07.849 [2024-11-20T15:31:39.083Z] Total : 7276.03 28.42 0.00 0.00 17558.23 2559.02 35951.18 00:30:07.849 16:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2123878 00:30:07.849 16:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2123881 00:30:07.849 16:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:07.849 16:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.849 16:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:07.849 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.849 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:07.849 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:07.849 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:07.849 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:07.849 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:07.849 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:07.849 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:07.849 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:07.849 rmmod nvme_tcp 00:30:07.849 rmmod nvme_fabrics 00:30:07.849 rmmod nvme_keyring 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2123852 ']' 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2123852 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2123852 ']' 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2123852 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2123852 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2123852' 00:30:08.108 killing process with pid 2123852 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2123852 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2123852 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.108 16:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.644 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:10.645 00:30:10.645 real 0m10.791s 00:30:10.645 user 0m15.368s 00:30:10.645 sys 0m6.480s 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:10.645 ************************************ 00:30:10.645 END TEST nvmf_bdev_io_wait 00:30:10.645 ************************************ 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:10.645 ************************************ 00:30:10.645 START TEST nvmf_queue_depth 00:30:10.645 ************************************ 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:10.645 * Looking for test storage... 
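The nvmftestfini teardown traced in the last few entries is symmetric with the setup: stop the target, strip only the SPDK-tagged iptables rule, and drop the namespace, which hands cvl_0_0 back to the root namespace. Condensed sketch, with the pid and names taken from this run:
# Sketch only -- the cleanup performed above; the harness also waits for the target to exit.
kill 2123852                                            # killprocess: stop nvmf_tgt (pid from this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: remove only rules tagged SPDK_NVMF
ip netns delete cvl_0_0_ns_spdk                         # remove_spdk_ns: cvl_0_0 returns to the root namespace
ip -4 addr flush cvl_0_1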
00:30:10.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:10.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.645 --rc genhtml_branch_coverage=1 00:30:10.645 --rc genhtml_function_coverage=1 00:30:10.645 --rc genhtml_legend=1 00:30:10.645 --rc geninfo_all_blocks=1 00:30:10.645 --rc geninfo_unexecuted_blocks=1 00:30:10.645 00:30:10.645 ' 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:10.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.645 --rc genhtml_branch_coverage=1 00:30:10.645 --rc genhtml_function_coverage=1 00:30:10.645 --rc genhtml_legend=1 00:30:10.645 --rc geninfo_all_blocks=1 00:30:10.645 --rc geninfo_unexecuted_blocks=1 00:30:10.645 00:30:10.645 ' 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:10.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.645 --rc genhtml_branch_coverage=1 00:30:10.645 --rc genhtml_function_coverage=1 00:30:10.645 --rc genhtml_legend=1 00:30:10.645 --rc geninfo_all_blocks=1 00:30:10.645 --rc geninfo_unexecuted_blocks=1 00:30:10.645 00:30:10.645 ' 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:10.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.645 --rc genhtml_branch_coverage=1 00:30:10.645 --rc genhtml_function_coverage=1 00:30:10.645 --rc genhtml_legend=1 00:30:10.645 --rc geninfo_all_blocks=1 00:30:10.645 --rc 
geninfo_unexecuted_blocks=1 00:30:10.645 00:30:10.645 ' 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.645 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:10.646 16:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.217 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.217 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:17.217 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:17.218 16:31:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:17.218 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:17.218 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:30:17.218 Found net devices under 0000:86:00.0: cvl_0_0 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:17.218 Found net devices under 0000:86:00.1: cvl_0_1 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:17.218 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:17.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:30:17.218 00:30:17.219 --- 10.0.0.2 ping statistics --- 00:30:17.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.219 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:17.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:30:17.219 00:30:17.219 --- 10.0.0.1 ping statistics --- 00:30:17.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.219 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2127698 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2127698 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2127698 ']' 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
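Everything nvmftestinit has traced up to this point reduces to a small two-interface topology: the two E810 ports (0x8086 - 0x159b) were mapped to cvl_0_0 and cvl_0_1 through /sys/bus/pci/devices/*/net, cvl_0_0 was moved into a private namespace as the target side (10.0.0.2), cvl_0_1 stayed in the root namespace as the initiator side (10.0.0.1), an iptables rule admits TCP port 4420, and a ping in each direction confirms reachability. A condensed sketch of that setup, using only the names and addresses visible in the trace (not the full helper from nvmf/common.sh):

  # start from clean interfaces, then split them across namespaces
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP traffic on the initiator-facing port (the real helper also tags
  # the rule with an SPDK_NVMF comment so it can be removed again at teardown)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1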
00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.219 [2024-11-20 16:31:47.579239] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:17.219 [2024-11-20 16:31:47.580172] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:30:17.219 [2024-11-20 16:31:47.580213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.219 [2024-11-20 16:31:47.661757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.219 [2024-11-20 16:31:47.700142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.219 [2024-11-20 16:31:47.700176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.219 [2024-11-20 16:31:47.700183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.219 [2024-11-20 16:31:47.700189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.219 [2024-11-20 16:31:47.700194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.219 [2024-11-20 16:31:47.700752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.219 [2024-11-20 16:31:47.767118] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:17.219 [2024-11-20 16:31:47.767361] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
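What nvmfappstart amounts to here is launching nvmf_tgt inside the target namespace with a single-core mask in interrupt mode and then waiting until its RPC socket answers; the notices above confirm the reactor came up on core 1 and both spdk_threads were set to interrupt mode. A rough sketch of that start-and-wait step, run from the SPDK checkout used by this job (the polling loop is a simplification of the test's waitforlisten helper, and rpc_get_methods is used only as a cheap readiness probe):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # rpc.py talks to the default /var/tmp/spdk.sock; keep probing until the target is ready
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done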
00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.219 [2024-11-20 16:31:47.841482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.219 Malloc0 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
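With the target up, queue_depth.sh provisions it entirely over the RPC socket: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, one subsystem with that bdev as a namespace, and a TCP listener on the target-side address. A minimal equivalent using scripts/rpc.py directly, with the arguments taken from the trace (rpc_cmd in the test is a thin wrapper around calls like these; the relative path assumes the SPDK repo root):

  rpc=./scripts/rpc.py                                 # defaults to the target's /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192         # transport options exactly as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0            # MALLOC_BDEV_SIZE=64 (MiB), MALLOC_BLOCK_SIZE=512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420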
00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.219 [2024-11-20 16:31:47.921382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2127896 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2127896 /var/tmp/bdevperf.sock 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2127896 ']' 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:17.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.219 16:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.219 [2024-11-20 16:31:47.971985] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
00:30:17.219 [2024-11-20 16:31:47.972029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127896 ] 00:30:17.219 [2024-11-20 16:31:48.045715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.219 [2024-11-20 16:31:48.088032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.219 16:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.219 16:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:17.219 16:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:17.219 16:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.219 16:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.219 NVMe0n1 00:30:17.219 16:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.219 16:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:17.220 Running I/O for 10 seconds... 00:30:19.525 12079.00 IOPS, 47.18 MiB/s [2024-11-20T15:31:51.694Z] 12209.00 IOPS, 47.69 MiB/s [2024-11-20T15:31:52.627Z] 12290.00 IOPS, 48.01 MiB/s [2024-11-20T15:31:53.562Z] 12329.75 IOPS, 48.16 MiB/s [2024-11-20T15:31:54.496Z] 12458.20 IOPS, 48.66 MiB/s [2024-11-20T15:31:55.430Z] 12451.00 IOPS, 48.64 MiB/s [2024-11-20T15:31:56.803Z] 12477.00 IOPS, 48.74 MiB/s [2024-11-20T15:31:57.736Z] 12537.25 IOPS, 48.97 MiB/s [2024-11-20T15:31:58.671Z] 12538.67 IOPS, 48.98 MiB/s [2024-11-20T15:31:58.671Z] 12582.20 IOPS, 49.15 MiB/s 00:30:27.437 Latency(us) 00:30:27.437 [2024-11-20T15:31:58.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.437 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:27.437 Verification LBA range: start 0x0 length 0x4000 00:30:27.437 NVMe0n1 : 10.06 12594.18 49.20 0.00 0.00 81026.28 18849.40 53177.78 00:30:27.437 [2024-11-20T15:31:58.671Z] =================================================================================================================== 00:30:27.437 [2024-11-20T15:31:58.671Z] Total : 12594.18 49.20 0.00 0.00 81026.28 18849.40 53177.78 00:30:27.437 { 00:30:27.437 "results": [ 00:30:27.437 { 00:30:27.437 "job": "NVMe0n1", 00:30:27.437 "core_mask": "0x1", 00:30:27.437 "workload": "verify", 00:30:27.437 "status": "finished", 00:30:27.437 "verify_range": { 00:30:27.437 "start": 0, 00:30:27.437 "length": 16384 00:30:27.437 }, 00:30:27.437 "queue_depth": 1024, 00:30:27.437 "io_size": 4096, 00:30:27.437 "runtime": 10.062984, 00:30:27.437 "iops": 12594.176836612281, 00:30:27.437 "mibps": 49.196003268016725, 00:30:27.437 "io_failed": 0, 00:30:27.437 "io_timeout": 0, 00:30:27.437 "avg_latency_us": 81026.27815842205, 00:30:27.437 "min_latency_us": 18849.401904761904, 00:30:27.437 "max_latency_us": 53177.782857142854 00:30:27.437 } 
00:30:27.437 ], 00:30:27.437 "core_count": 1 00:30:27.437 } 00:30:27.437 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2127896 00:30:27.437 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2127896 ']' 00:30:27.437 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2127896 00:30:27.437 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:27.437 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.437 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2127896 00:30:27.437 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:27.437 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:27.437 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2127896' 00:30:27.437 killing process with pid 2127896 00:30:27.437 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2127896 00:30:27.437 Received shutdown signal, test time was about 10.000000 seconds 00:30:27.437 00:30:27.437 Latency(us) 00:30:27.437 [2024-11-20T15:31:58.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.437 [2024-11-20T15:31:58.671Z] =================================================================================================================== 00:30:27.437 [2024-11-20T15:31:58.671Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:27.437 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2127896 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.696 rmmod nvme_tcp 00:30:27.696 rmmod nvme_fabrics 00:30:27.696 rmmod nvme_keyring 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
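The measurement itself is driven by the bdevperf example application rather than a kernel initiator: bdevperf is started idle (-z) on its own RPC socket, a bdev_nvme controller is attached over TCP to the listener just created, and perform_tests kicks off the 10-second verify workload at queue depth 1024 with 4096-byte I/O, which produced the roughly 12.6k IOPS / 49 MiB/s summary above. A condensed sketch of that sequence with the arguments from the trace (paths are relative to the SPDK checkout used by this job):

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # attach an NVMe-oF/TCP controller to the subsystem exported by the target
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # start the run and wait for it; results come back as the JSON block shown above
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  kill $bdevperf_pid

Teardown then mirrors setup, as the surrounding nvmftestfini trace shows: both pids are killed, nvme-tcp/nvme-fabrics/nvme-keyring are unloaded, the SPDK-tagged iptables rule is dropped, and the cvl_0_0_ns_spdk namespace and its addresses are removed before the next test starts.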
00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2127698 ']' 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2127698 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2127698 ']' 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2127698 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2127698 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2127698' 00:30:27.696 killing process with pid 2127698 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2127698 00:30:27.696 16:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2127698 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.956 16:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.858 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:29.858 00:30:29.858 real 0m19.648s 00:30:29.858 user 0m22.669s 00:30:29.858 sys 0m6.263s 00:30:29.858 16:32:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.858 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:29.858 ************************************ 00:30:29.858 END TEST nvmf_queue_depth 00:30:29.858 ************************************ 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:30.116 ************************************ 00:30:30.116 START TEST nvmf_target_multipath 00:30:30.116 ************************************ 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:30.116 * Looking for test storage... 00:30:30.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.116 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:30.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.117 --rc genhtml_branch_coverage=1 00:30:30.117 --rc genhtml_function_coverage=1 00:30:30.117 --rc genhtml_legend=1 00:30:30.117 --rc geninfo_all_blocks=1 00:30:30.117 --rc geninfo_unexecuted_blocks=1 00:30:30.117 00:30:30.117 ' 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:30.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.117 --rc genhtml_branch_coverage=1 00:30:30.117 --rc genhtml_function_coverage=1 00:30:30.117 --rc genhtml_legend=1 00:30:30.117 --rc geninfo_all_blocks=1 00:30:30.117 --rc geninfo_unexecuted_blocks=1 00:30:30.117 00:30:30.117 ' 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:30.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.117 --rc genhtml_branch_coverage=1 00:30:30.117 --rc genhtml_function_coverage=1 00:30:30.117 --rc genhtml_legend=1 
00:30:30.117 --rc geninfo_all_blocks=1 00:30:30.117 --rc geninfo_unexecuted_blocks=1 00:30:30.117 00:30:30.117 ' 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:30.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.117 --rc genhtml_branch_coverage=1 00:30:30.117 --rc genhtml_function_coverage=1 00:30:30.117 --rc genhtml_legend=1 00:30:30.117 --rc geninfo_all_blocks=1 00:30:30.117 --rc geninfo_unexecuted_blocks=1 00:30:30.117 00:30:30.117 ' 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.117 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.376 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:30.377 16:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.945 16:32:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:36.945 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:36.945 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:36.945 16:32:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.945 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:36.946 Found net devices under 0000:86:00.0: cvl_0_0 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:36.946 Found net devices under 0000:86:00.1: cvl_0_1 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.946 16:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:36.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
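nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point test link: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, cvl_0_1 stays in the host namespace as the initiator side at 10.0.0.1, and an iptables ACCEPT rule for TCP port 4420 is inserted with an SPDK_NVMF comment so it can be removed selectively at teardown. Condensed replay of those steps (interface names are the cvl_0_* devices on this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'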
00:30:36.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:30:36.946 00:30:36.946 --- 10.0.0.2 ping statistics --- 00:30:36.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.946 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:36.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:30:36.946 00:30:36.946 --- 10.0.0.1 ping statistics --- 00:30:36.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.946 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:36.946 only one NIC for nvmf test 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.946 rmmod nvme_tcp 00:30:36.946 rmmod nvme_fabrics 00:30:36.946 rmmod nvme_keyring 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:36.946 16:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.946 16:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:38.322 16:32:09 
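nvmftestfini's cleanup, traced above, is symmetric to the setup: unload nvme-tcp, nvme-fabrics and nvme-keyring (the rmmod messages), then the iptr helper re-filters the firewall by saving the rule set, dropping every line tagged SPDK_NVMF, and restoring the rest, and finally the test namespace and leftover addresses are flushed. A rough manual equivalent; the ip netns del line is an assumption about what _remove_spdk_ns amounts to here, since its body is not traced in this excerpt:

    iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop only SPDK-tagged rules
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true          # assumed _remove_spdk_ns behaviour
    ip -4 addr flush cvl_0_1                                  # clear the initiator-side address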
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.322 00:30:38.322 real 0m8.272s 00:30:38.322 user 0m1.828s 00:30:38.322 sys 0m4.460s 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:38.322 ************************************ 00:30:38.322 END TEST nvmf_target_multipath 00:30:38.322 ************************************ 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:38.322 ************************************ 00:30:38.322 START TEST nvmf_zcopy 00:30:38.322 ************************************ 00:30:38.322 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:38.582 * Looking for test storage... 
00:30:38.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:38.582 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:38.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.583 --rc genhtml_branch_coverage=1 00:30:38.583 --rc genhtml_function_coverage=1 00:30:38.583 --rc genhtml_legend=1 00:30:38.583 --rc geninfo_all_blocks=1 00:30:38.583 --rc geninfo_unexecuted_blocks=1 00:30:38.583 00:30:38.583 ' 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:38.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.583 --rc genhtml_branch_coverage=1 00:30:38.583 --rc genhtml_function_coverage=1 00:30:38.583 --rc genhtml_legend=1 00:30:38.583 --rc geninfo_all_blocks=1 00:30:38.583 --rc geninfo_unexecuted_blocks=1 00:30:38.583 00:30:38.583 ' 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:38.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.583 --rc genhtml_branch_coverage=1 00:30:38.583 --rc genhtml_function_coverage=1 00:30:38.583 --rc genhtml_legend=1 00:30:38.583 --rc geninfo_all_blocks=1 00:30:38.583 --rc geninfo_unexecuted_blocks=1 00:30:38.583 00:30:38.583 ' 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:38.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.583 --rc genhtml_branch_coverage=1 00:30:38.583 --rc genhtml_function_coverage=1 00:30:38.583 --rc genhtml_legend=1 00:30:38.583 --rc geninfo_all_blocks=1 00:30:38.583 --rc geninfo_unexecuted_blocks=1 00:30:38.583 00:30:38.583 ' 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.583 16:32:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.583 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.584 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.584 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.584 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.584 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.584 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:38.584 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.584 16:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.151 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.151 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.152 16:32:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:45.152 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:45.152 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:45.152 Found net devices under 0000:86:00.0: cvl_0_0 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:45.152 Found net devices under 0000:86:00.1: cvl_0_1 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.152 16:32:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.152 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:30:45.152 00:30:45.152 --- 10.0.0.2 ping statistics --- 00:30:45.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.152 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:45.153 00:30:45.153 --- 10.0.0.1 ping statistics --- 00:30:45.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.153 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2136541 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2136541 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2136541 ']' 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.153 [2024-11-20 16:32:15.675434] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.153 [2024-11-20 16:32:15.676352] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:30:45.153 [2024-11-20 16:32:15.676385] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.153 [2024-11-20 16:32:15.756441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.153 [2024-11-20 16:32:15.796313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.153 [2024-11-20 16:32:15.796348] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.153 [2024-11-20 16:32:15.796355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.153 [2024-11-20 16:32:15.796361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.153 [2024-11-20 16:32:15.796366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.153 [2024-11-20 16:32:15.796899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.153 [2024-11-20 16:32:15.862935] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.153 [2024-11-20 16:32:15.863137] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
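At this point nvmfappstart has launched the target inside the test namespace: NVMF_APP is prefixed with the netns command (common.sh@293 above), so nvmf_tgt runs with the flags shown (-i 0 -e 0xFFFF --interrupt-mode -m 0x2), and waitforlisten blocks until the RPC socket answers. A hand-run approximation; polling rpc_get_methods is an assumption, the real waitforlisten also tracks the pid and socket path:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # wait until the app serves RPCs on the default /var/tmp/spdk.sock
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done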
00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.153 [2024-11-20 16:32:15.929621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.153 [2024-11-20 16:32:15.953819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:45.153 16:32:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.153 malloc0 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:45.153 16:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:45.153 16:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:45.153 16:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:45.153 { 00:30:45.153 "params": { 00:30:45.153 "name": "Nvme$subsystem", 00:30:45.153 "trtype": "$TEST_TRANSPORT", 00:30:45.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.153 "adrfam": "ipv4", 00:30:45.153 "trsvcid": "$NVMF_PORT", 00:30:45.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.153 "hdgst": ${hdgst:-false}, 00:30:45.153 "ddgst": ${ddgst:-false} 00:30:45.153 }, 00:30:45.153 "method": "bdev_nvme_attach_controller" 00:30:45.153 } 00:30:45.153 EOF 00:30:45.153 )") 00:30:45.153 16:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:45.153 16:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:45.153 16:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:45.153 16:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:45.153 "params": { 00:30:45.153 "name": "Nvme1", 00:30:45.153 "trtype": "tcp", 00:30:45.153 "traddr": "10.0.0.2", 00:30:45.153 "adrfam": "ipv4", 00:30:45.153 "trsvcid": "4420", 00:30:45.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:45.153 "hdgst": false, 00:30:45.153 "ddgst": false 00:30:45.153 }, 00:30:45.153 "method": "bdev_nvme_attach_controller" 00:30:45.153 }' 00:30:45.153 [2024-11-20 16:32:16.045659] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
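The zcopy target is then configured through the five RPCs traced above as rpc_cmd calls; rpc_cmd in the autotest helpers is a thin wrapper around scripts/rpc.py, so the same configuration issued directly looks like this:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1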
00:30:45.153 [2024-11-20 16:32:16.045699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2136569 ] 00:30:45.153 [2024-11-20 16:32:16.119472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.153 [2024-11-20 16:32:16.159937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.153 Running I/O for 10 seconds... 00:30:47.463 8491.00 IOPS, 66.34 MiB/s [2024-11-20T15:32:19.681Z] 8551.00 IOPS, 66.80 MiB/s [2024-11-20T15:32:20.663Z] 8571.67 IOPS, 66.97 MiB/s [2024-11-20T15:32:21.598Z] 8549.25 IOPS, 66.79 MiB/s [2024-11-20T15:32:22.533Z] 8536.80 IOPS, 66.69 MiB/s [2024-11-20T15:32:23.468Z] 8548.50 IOPS, 66.79 MiB/s [2024-11-20T15:32:24.402Z] 8558.43 IOPS, 66.86 MiB/s [2024-11-20T15:32:25.777Z] 8566.62 IOPS, 66.93 MiB/s [2024-11-20T15:32:26.712Z] 8577.00 IOPS, 67.01 MiB/s [2024-11-20T15:32:26.712Z] 8580.10 IOPS, 67.03 MiB/s 00:30:55.478 Latency(us) 00:30:55.478 [2024-11-20T15:32:26.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.478 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:55.478 Verification LBA range: start 0x0 length 0x1000 00:30:55.478 Nvme1n1 : 10.05 8549.68 66.79 0.00 0.00 14873.31 2512.21 43690.67 00:30:55.478 [2024-11-20T15:32:26.712Z] =================================================================================================================== 00:30:55.478 [2024-11-20T15:32:26.712Z] Total : 8549.68 66.79 0.00 0.00 14873.31 2512.21 43690.67 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2138178 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:55.478 { 00:30:55.478 "params": { 00:30:55.478 "name": "Nvme$subsystem", 00:30:55.478 "trtype": "$TEST_TRANSPORT", 00:30:55.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.478 "adrfam": "ipv4", 00:30:55.478 "trsvcid": "$NVMF_PORT", 00:30:55.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.478 "hdgst": ${hdgst:-false}, 00:30:55.478 "ddgst": ${ddgst:-false} 00:30:55.478 }, 00:30:55.478 "method": "bdev_nvme_attach_controller" 00:30:55.478 } 00:30:55.478 EOF 00:30:55.478 )") 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:55.478 
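The 10-second verify run above drives bdevperf against the target through a generated bdev_nvme JSON config, which the harness feeds over a /dev/fd process substitution. An equivalent invocation with the config captured to a file first; this assumes nvmf/common.sh has been sourced so the gen_nvmf_target_json helper is defined:

    gen_nvmf_target_json > /tmp/nvmf_bdev.json     # emits the bdev_nvme_attach_controller config shown above
    ./build/examples/bdevperf --json /tmp/nvmf_bdev.json -t 10 -q 128 -w verify -o 8192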
[2024-11-20 16:32:26.593230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.478 [2024-11-20 16:32:26.593265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:55.478 16:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:55.478 "params": { 00:30:55.478 "name": "Nvme1", 00:30:55.478 "trtype": "tcp", 00:30:55.478 "traddr": "10.0.0.2", 00:30:55.478 "adrfam": "ipv4", 00:30:55.478 "trsvcid": "4420", 00:30:55.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:55.478 "hdgst": false, 00:30:55.478 "ddgst": false 00:30:55.478 }, 00:30:55.478 "method": "bdev_nvme_attach_controller" 00:30:55.478 }' 00:30:55.478 [2024-11-20 16:32:26.605194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.478 [2024-11-20 16:32:26.605219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.478 [2024-11-20 16:32:26.617188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.478 [2024-11-20 16:32:26.617197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.478 [2024-11-20 16:32:26.629190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.478 [2024-11-20 16:32:26.629199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.478 [2024-11-20 16:32:26.633571] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
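The long run of paired errors that follows ("Requested NSID 1 already in use" from subsystem.c and "Unable to add namespace" from nvmf_rpc.c) appears to be driven by the test itself rather than a malfunction: while the 5-second randrw bdevperf job (perfpid 2138178) is in flight, the script keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is still attached, so each call is rejected every 10-15 ms. The loop itself is not traced here because xtrace is switched off around it, so the sketch below is a speculative reconstruction rather than the literal zcopy.sh source; rpc_cmd, the NQN and the malloc0 bdev name are taken from the trace earlier in this log.

    # Speculative sketch of the namespace churn behind the errors below. rpc_cmd is
    # the autotest harness's RPC helper (assumed to be available in this context),
    # and the exact loop shape is an assumption since xtrace is off for this part
    # of target/zcopy.sh.
    while kill -0 "$perfpid" 2> /dev/null; do
            # NSID 1 is still attached, so each add is rejected with
            # "Requested NSID 1 already in use"; the failure is ignored and the
            # loop keeps exercising the namespace add path while I/O is running.
            rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 > /dev/null 2>&1 || true
    done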
00:30:55.478 [2024-11-20 16:32:26.633611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138178 ] 00:30:55.478 [2024-11-20 16:32:26.641191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.478 [2024-11-20 16:32:26.641205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.478 [2024-11-20 16:32:26.653186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.478 [2024-11-20 16:32:26.653195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.478 [2024-11-20 16:32:26.665191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.478 [2024-11-20 16:32:26.665205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.478 [2024-11-20 16:32:26.677188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.478 [2024-11-20 16:32:26.677196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.478 [2024-11-20 16:32:26.689190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.478 [2024-11-20 16:32:26.689198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.478 [2024-11-20 16:32:26.701189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.478 [2024-11-20 16:32:26.701198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.708868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.737 [2024-11-20 16:32:26.713190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.713200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.725190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.725206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.737189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.737198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.749189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.749207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.749967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.737 [2024-11-20 16:32:26.761199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.761220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.773196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.773219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.785194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:30:55.737 [2024-11-20 16:32:26.785224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.797194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.797213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.809196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.809232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.821198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.821221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.833252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.833269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.845196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.845216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.857196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.857214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.869193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.869212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.881190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.881199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.893187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.893196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.905192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.905219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.737 [2024-11-20 16:32:26.917194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.737 [2024-11-20 16:32:26.917215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:26.970376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:26.970396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:26.981190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:26.981209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 Running I/O for 5 seconds... 
00:30:55.996 [2024-11-20 16:32:26.996935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:26.996955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.011357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.011377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.026241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.026261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.041075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.041095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.053131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.053150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.067080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.067099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.082168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.082186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.097185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.097211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.110776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.110795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.125605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.125624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.141007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.141026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.154752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.154770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.169828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.169846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.184816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.184835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.198944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 
[2024-11-20 16:32:27.198962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.213497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.213515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.996 [2024-11-20 16:32:27.224743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.996 [2024-11-20 16:32:27.224761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.239222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.239240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.253639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.253657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.264029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.264046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.278891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.278909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.293718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.293737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.306055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.306073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.318480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.318498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.333158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.333176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.344434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.344452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.358831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.358849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.373820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.373838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.385190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.385213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.399094] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.399112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.413250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.413268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.424268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.424286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.439004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.439021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.453710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.453727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.464059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.464077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.255 [2024-11-20 16:32:27.478649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.255 [2024-11-20 16:32:27.478666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.493375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.493393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.504513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.504531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.519089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.519107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.533746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.533763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.550049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.550067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.564886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.564904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.579360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.579386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.593959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.593977] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.604761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.604780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.619491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.619510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.634328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.634346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.648943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.648961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.663259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.663277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.677654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.677670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.693546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.693562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.709831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.709849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.721259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.721277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.514 [2024-11-20 16:32:27.734799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.514 [2024-11-20 16:32:27.734817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.749922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.749939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.765171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.765189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.779193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.779215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.793690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.793707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.809057] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.809075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.822125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.822142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.837386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.837405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.849195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.849224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.863143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.863161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.877451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.877468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.888835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.888853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.903018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.903036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.917780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.917797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.932505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.932523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.946860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.946878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.961316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.961335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.973937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.973954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 [2024-11-20 16:32:27.986871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.773 [2024-11-20 16:32:27.986890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.773 16774.00 IOPS, 131.05 MiB/s [2024-11-20T15:32:28.007Z] [2024-11-20 16:32:27.997001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:56.773 [2024-11-20 16:32:27.997022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.011095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.011114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.025697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.025715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.041014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.041032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.053903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.053920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.066134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.066151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.077517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.077534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.090929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.090946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.105728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.105750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.117866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.117884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.130308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.130326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.141652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.141669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.155488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.155507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.170357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.170375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.184957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.184976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.199115] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.199134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.213856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.213874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.224800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.224819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.238606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.238624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.031 [2024-11-20 16:32:28.253460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.031 [2024-11-20 16:32:28.253479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.264089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.264107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.278815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.278833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.293365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.293383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.305916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.305934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.319196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.319220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.333742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.333759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.344322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.344340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.358885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.358903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.373615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.373633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.383873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.383891] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.398968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.398987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.413498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.413516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.428888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.289 [2024-11-20 16:32:28.428906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.289 [2024-11-20 16:32:28.442194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.290 [2024-11-20 16:32:28.442219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.290 [2024-11-20 16:32:28.453230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.290 [2024-11-20 16:32:28.453248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.290 [2024-11-20 16:32:28.466881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.290 [2024-11-20 16:32:28.466900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.290 [2024-11-20 16:32:28.481229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.290 [2024-11-20 16:32:28.481247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.290 [2024-11-20 16:32:28.494919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.290 [2024-11-20 16:32:28.494937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.290 [2024-11-20 16:32:28.509632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.290 [2024-11-20 16:32:28.509650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.520978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.520996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.535157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.535176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.549710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.549728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.564841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.564859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.579157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.579175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.594360] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.594381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.609070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.609088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.621701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.621718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.635210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.635228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.650338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.650356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.665517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.665534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.677775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.677792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.690960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.690978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.705798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.705816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.721018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.721035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.735144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.735162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.749392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.749419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.760082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.760100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.548 [2024-11-20 16:32:28.775044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.548 [2024-11-20 16:32:28.775062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.789839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.789856] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.804945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.804963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.818008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.818025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.830588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.830605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.840831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.840848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.855258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.855277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.870149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.870167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.885054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.885072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.898109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.898127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.910608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.910627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.921117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.921136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.928134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.928152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.942292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.942310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.957576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.957594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.972899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.972917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:28.987057] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:28.987075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 16779.00 IOPS, 131.09 MiB/s [2024-11-20T15:32:29.041Z] [2024-11-20 16:32:29.001977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:29.001994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:29.016751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:29.016770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.807 [2024-11-20 16:32:29.030692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.807 [2024-11-20 16:32:29.030712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.065 [2024-11-20 16:32:29.045461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.065 [2024-11-20 16:32:29.045479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.065 [2024-11-20 16:32:29.055745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.065 [2024-11-20 16:32:29.055762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.065 [2024-11-20 16:32:29.070461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.065 [2024-11-20 16:32:29.070480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.065 [2024-11-20 16:32:29.084989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.065 [2024-11-20 16:32:29.085007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.065 [2024-11-20 16:32:29.099277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.065 [2024-11-20 16:32:29.099295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.065 [2024-11-20 16:32:29.113760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.065 [2024-11-20 16:32:29.113777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.065 [2024-11-20 16:32:29.128970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.065 [2024-11-20 16:32:29.128992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.065 [2024-11-20 16:32:29.142075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.065 [2024-11-20 16:32:29.142093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.065 [2024-11-20 16:32:29.154719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.066 [2024-11-20 16:32:29.154737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.066 [2024-11-20 16:32:29.163659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.066 [2024-11-20 16:32:29.163676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.066 [2024-11-20 16:32:29.178533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:58.066 [2024-11-20 16:32:29.178551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.066 [2024-11-20 16:32:29.193263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.066 [2024-11-20 16:32:29.193280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.066 [2024-11-20 16:32:29.206109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.066 [2024-11-20 16:32:29.206126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.066 [2024-11-20 16:32:29.217033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.066 [2024-11-20 16:32:29.217051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.066 [2024-11-20 16:32:29.231389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.066 [2024-11-20 16:32:29.231406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.066 [2024-11-20 16:32:29.246467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.066 [2024-11-20 16:32:29.246486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.066 [2024-11-20 16:32:29.261200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.066 [2024-11-20 16:32:29.261222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.066 [2024-11-20 16:32:29.271880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.066 [2024-11-20 16:32:29.271897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.066 [2024-11-20 16:32:29.286713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.066 [2024-11-20 16:32:29.286731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.324 [2024-11-20 16:32:29.301185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.324 [2024-11-20 16:32:29.301209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.324 [2024-11-20 16:32:29.314931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.324 [2024-11-20 16:32:29.314949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.324 [2024-11-20 16:32:29.329606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.324 [2024-11-20 16:32:29.329622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.324 [2024-11-20 16:32:29.345040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.324 [2024-11-20 16:32:29.345057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.324 [2024-11-20 16:32:29.359171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.324 [2024-11-20 16:32:29.359189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.324 [2024-11-20 16:32:29.373686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.324 [2024-11-20 16:32:29.373704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.324 [2024-11-20 16:32:29.388906] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:58.324 [2024-11-20 16:32:29.388928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... duplicate log entries omitted: the subsystem.c:2123 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" error pair above repeats with timestamps 16:32:29.403 through 16:32:31.983; the periodic I/O throughput reports from that interval are kept below ...]
00:30:58.842 16801.67 IOPS, 131.26 MiB/s [2024-11-20T15:32:30.076Z]
00:30:59.877 16734.75 IOPS, 130.74 MiB/s [2024-11-20T15:32:31.111Z]
00:31:00.913 [2024-11-20 16:32:31.997332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.913 [2024-11-20 16:32:31.997351]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:00.913 16756.80 IOPS, 130.91 MiB/s
00:31:00.913 Latency(us)
00:31:00.913 [2024-11-20T15:32:32.147Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:31:00.913 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:00.913 Nvme1n1                     :       5.01   16758.96     130.93      0.00     0.00    7630.88    1966.08   14230.67
00:31:00.913 [2024-11-20T15:32:32.147Z] ===================================================================================================================
00:31:00.913 [2024-11-20T15:32:32.147Z] Total                       :              16758.96     130.93      0.00     0.00    7630.88    1966.08   14230.67
00:31:00.913 [2024-11-20 16:32:32.005198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:00.913 [2024-11-20 16:32:32.005225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... duplicate log entries omitted: the same error pair repeats with timestamps 16:32:32.017 through 16:32:32.125 ...]
00:31:00.914 [2024-11-20 16:32:32.137194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.914 [2024-11-20 16:32:32.137211]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.170 [2024-11-20 16:32:32.149189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.170 [2024-11-20 16:32:32.149199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.170 [2024-11-20 16:32:32.161190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.170 [2024-11-20 16:32:32.161200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2138178) - No such process 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2138178 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:01.170 delay0 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.170 16:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:01.171 [2024-11-20 16:32:32.361320] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:07.732 Initializing NVMe Controllers 00:31:07.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:07.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:07.732 Initialization complete. Launching workers. 
00:31:07.732 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 778 00:31:07.732 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1065, failed to submit 33 00:31:07.732 success 926, unsuccessful 139, failed 0 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.732 rmmod nvme_tcp 00:31:07.732 rmmod nvme_fabrics 00:31:07.732 rmmod nvme_keyring 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2136541 ']' 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2136541 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2136541 ']' 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2136541 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2136541 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:07.732 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2136541' 00:31:07.732 killing process with pid 2136541 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2136541 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2136541 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:07.733 16:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.733 16:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.269 16:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:10.269 00:31:10.269 real 0m31.453s 00:31:10.269 user 0m40.507s 00:31:10.269 sys 0m12.454s 00:31:10.269 16:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:10.269 16:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:10.269 ************************************ 00:31:10.269 END TEST nvmf_zcopy 00:31:10.269 ************************************ 00:31:10.269 16:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:10.269 16:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:10.269 16:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:10.269 16:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:10.269 ************************************ 00:31:10.269 START TEST nvmf_nmic 00:31:10.269 ************************************ 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:10.269 * Looking for test storage... 
00:31:10.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.269 --rc genhtml_branch_coverage=1 00:31:10.269 --rc genhtml_function_coverage=1 00:31:10.269 --rc genhtml_legend=1 00:31:10.269 --rc geninfo_all_blocks=1 00:31:10.269 --rc geninfo_unexecuted_blocks=1 00:31:10.269 00:31:10.269 ' 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.269 --rc genhtml_branch_coverage=1 00:31:10.269 --rc genhtml_function_coverage=1 00:31:10.269 --rc genhtml_legend=1 00:31:10.269 --rc geninfo_all_blocks=1 00:31:10.269 --rc geninfo_unexecuted_blocks=1 00:31:10.269 00:31:10.269 ' 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.269 --rc genhtml_branch_coverage=1 00:31:10.269 --rc genhtml_function_coverage=1 00:31:10.269 --rc genhtml_legend=1 00:31:10.269 --rc geninfo_all_blocks=1 00:31:10.269 --rc geninfo_unexecuted_blocks=1 00:31:10.269 00:31:10.269 ' 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.269 --rc genhtml_branch_coverage=1 00:31:10.269 --rc genhtml_function_coverage=1 00:31:10.269 --rc genhtml_legend=1 00:31:10.269 --rc geninfo_all_blocks=1 00:31:10.269 --rc geninfo_unexecuted_blocks=1 00:31:10.269 00:31:10.269 ' 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.269 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.270 16:32:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:10.270 16:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.840 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.840 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:16.841 16:32:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:16.841 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.841 16:32:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:16.841 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:16.841 Found net devices under 0000:86:00.0: cvl_0_0 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.841 
16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:16.841 Found net devices under 0000:86:00.1: cvl_0_1 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:31:16.841 16:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:16.841 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.841 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.841 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.841 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.841 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:16.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:31:16.841 00:31:16.841 --- 10.0.0.2 ping statistics --- 00:31:16.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.841 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:31:16.841 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:16.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:31:16.842 00:31:16.842 --- 10.0.0.1 ping statistics --- 00:31:16.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.842 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2143535 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2143535 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2143535 ']' 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:16.842 16:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.842 [2024-11-20 16:32:47.208255] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:16.842 [2024-11-20 16:32:47.209209] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:31:16.842 [2024-11-20 16:32:47.209247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.842 [2024-11-20 16:32:47.291405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.842 [2024-11-20 16:32:47.333567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.842 [2024-11-20 16:32:47.333606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.842 [2024-11-20 16:32:47.333616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.842 [2024-11-20 16:32:47.333621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.842 [2024-11-20 16:32:47.333626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.842 [2024-11-20 16:32:47.335096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.842 [2024-11-20 16:32:47.335226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.842 [2024-11-20 16:32:47.335301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.842 [2024-11-20 16:32:47.335302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.842 [2024-11-20 16:32:47.404378] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:16.842 [2024-11-20 16:32:47.404724] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:16.842 [2024-11-20 16:32:47.405225] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
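[editor's note] The network plumbing traced above amounts to moving one of the two e810 ports (cvl_0_0) into a private namespace, addressing both ends, opening the NVMe/TCP port in iptables, and then launching nvmf_tgt inside that namespace in interrupt mode. A condensed, standalone sketch of that sequence follows; the interface names, addresses, and SPDK build path are the ones from this run and would differ on another rig.

  #!/usr/bin/env bash
  # Sketch of the test-bed setup traced above (names/paths taken from this log).
  TARGET_IF=cvl_0_0          # port handed to the SPDK target namespace
  INITIATOR_IF=cvl_0_1       # port left in the host namespace (initiator side)
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Let NVMe/TCP traffic from the initiator port reach the listener.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

  # Reachability check in both directions, as the trace does with ping.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # Start the target inside the namespace in interrupt mode (flags from this log).
  ip netns exec "$NS" \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

The waitforlisten step in the trace then simply blocks until the target is up and listening on /var/tmp/spdk.sock before any RPCs are issued.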
00:31:16.842 [2024-11-20 16:32:47.405630] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:16.842 [2024-11-20 16:32:47.405674] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:16.842 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.842 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:16.842 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:16.842 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:16.842 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.101 [2024-11-20 16:32:48.092126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.101 Malloc0 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
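[editor's note] The rpc_cmd calls traced from nmic.sh correspond to the rpc.py script referenced later in this log (rpc_py=.../scripts/rpk.py is set in fio.sh; the same script backs rpc_cmd). A hedged sketch of the same provisioning sequence, using only values that appear in this run; rpc_cmd is a test wrapper, so treat this as an approximation rather than the exact code path:

  # Provisioning sequence traced above, issued directly with scripts/rpc.py.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side, as nmic.sh does further down (a second listener on 4421 and
  # a second connect follow later in the trace):
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      --hostid=00ad29c2-ccbd-e911-906e-0017a4403562

Test case1 below then deliberately tries to add the same Malloc0 bdev to a second subsystem (cnode2) and expects the "Invalid parameters" JSON-RPC error, since the bdev is already claimed by cnode1.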
00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.101 [2024-11-20 16:32:48.180315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:17.101 test case1: single bdev can't be used in multiple subsystems 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.101 [2024-11-20 16:32:48.215805] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:17.101 [2024-11-20 16:32:48.215824] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:17.101 [2024-11-20 16:32:48.215832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.101 request: 00:31:17.101 { 00:31:17.101 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:17.101 "namespace": { 00:31:17.101 "bdev_name": "Malloc0", 00:31:17.101 "no_auto_visible": false 00:31:17.101 }, 00:31:17.101 "method": "nvmf_subsystem_add_ns", 00:31:17.101 "req_id": 1 00:31:17.101 } 00:31:17.101 Got JSON-RPC error response 00:31:17.101 response: 00:31:17.101 { 00:31:17.101 "code": -32602, 00:31:17.101 "message": "Invalid parameters" 00:31:17.101 } 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:17.101 16:32:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:17.101 Adding namespace failed - expected result. 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:17.101 test case2: host connect to nvmf target in multiple paths 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.101 [2024-11-20 16:32:48.227898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.101 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:17.359 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:17.616 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:17.616 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:17.616 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:17.616 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:17.617 16:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:19.519 16:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:19.519 16:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:19.519 16:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:19.798 16:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:19.798 16:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:19.798 16:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:19.798 16:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:19.798 [global] 00:31:19.798 thread=1 00:31:19.798 invalidate=1 
00:31:19.798 rw=write 00:31:19.798 time_based=1 00:31:19.798 runtime=1 00:31:19.798 ioengine=libaio 00:31:19.798 direct=1 00:31:19.798 bs=4096 00:31:19.798 iodepth=1 00:31:19.798 norandommap=0 00:31:19.798 numjobs=1 00:31:19.798 00:31:19.798 verify_dump=1 00:31:19.798 verify_backlog=512 00:31:19.798 verify_state_save=0 00:31:19.798 do_verify=1 00:31:19.798 verify=crc32c-intel 00:31:19.798 [job0] 00:31:19.798 filename=/dev/nvme0n1 00:31:19.798 Could not set queue depth (nvme0n1) 00:31:20.056 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:20.056 fio-3.35 00:31:20.056 Starting 1 thread 00:31:21.428 00:31:21.428 job0: (groupid=0, jobs=1): err= 0: pid=2144373: Wed Nov 20 16:32:52 2024 00:31:21.428 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:31:21.428 slat (nsec): min=10305, max=24704, avg=22261.32, stdev=2824.10 00:31:21.428 clat (usec): min=40835, max=41266, avg=40977.28, stdev=105.36 00:31:21.428 lat (usec): min=40857, max=41276, avg=40999.54, stdev=103.56 00:31:21.428 clat percentiles (usec): 00:31:21.428 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:31:21.428 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:21.428 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:21.428 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:21.428 | 99.99th=[41157] 00:31:21.428 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:31:21.428 slat (usec): min=10, max=28741, avg=68.04, stdev=1269.70 00:31:21.428 clat (usec): min=127, max=331, avg=138.61, stdev=12.40 00:31:21.428 lat (usec): min=138, max=29044, avg=206.65, stdev=1277.00 00:31:21.428 clat percentiles (usec): 00:31:21.428 | 1.00th=[ 131], 5.00th=[ 133], 10.00th=[ 133], 20.00th=[ 135], 00:31:21.428 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 137], 60.00th=[ 139], 00:31:21.428 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 149], 00:31:21.428 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 334], 99.95th=[ 334], 00:31:21.428 | 99.99th=[ 334] 00:31:21.428 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:21.428 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:21.428 lat (usec) : 250=95.51%, 500=0.37% 00:31:21.428 lat (msec) : 50=4.12% 00:31:21.428 cpu : usr=0.30%, sys=0.99%, ctx=538, majf=0, minf=1 00:31:21.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.428 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:21.428 00:31:21.428 Run status group 0 (all jobs): 00:31:21.428 READ: bw=87.1KiB/s (89.2kB/s), 87.1KiB/s-87.1KiB/s (89.2kB/s-89.2kB/s), io=88.0KiB (90.1kB), run=1010-1010msec 00:31:21.428 WRITE: bw=2028KiB/s (2076kB/s), 2028KiB/s-2028KiB/s (2076kB/s-2076kB/s), io=2048KiB (2097kB), run=1010-1010msec 00:31:21.428 00:31:21.428 Disk stats (read/write): 00:31:21.428 nvme0n1: ios=45/512, merge=0/0, ticks=1764/66, in_queue=1830, util=98.60% 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:21.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:21.428 16:32:52 
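[editor's note] The fio-wrapper invocation above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) expands to the [global]/[job0] job file echoed in the trace. A standalone equivalent is sketched below, assuming the connected controller shows up as /dev/nvme0n1 as it does in this run:

  # Write the same job file the wrapper generated and run it with plain fio.
  cat > /tmp/nmic-write.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  EOF

  fio /tmp/nmic-write.fio

After the run, the test tears the session down with nvme disconnect and waits for the SPDKISFASTANDAWESOME serial to disappear, as the next lines show.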
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:21.428 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:21.429 rmmod nvme_tcp 00:31:21.429 rmmod nvme_fabrics 00:31:21.429 rmmod nvme_keyring 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2143535 ']' 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2143535 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2143535 ']' 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2143535 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2143535 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2143535' 00:31:21.429 killing process with pid 2143535 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2143535 00:31:21.429 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2143535 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.688 16:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.593 16:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:23.594 00:31:23.594 real 0m13.776s 00:31:23.594 user 0m24.754s 00:31:23.594 sys 0m6.163s 00:31:23.594 16:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:23.594 16:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:23.594 ************************************ 00:31:23.594 END TEST nvmf_nmic 00:31:23.594 ************************************ 00:31:23.854 16:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:23.854 16:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:23.854 16:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:23.854 16:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:23.854 ************************************ 00:31:23.854 START TEST nvmf_fio_target 00:31:23.854 ************************************ 00:31:23.854 16:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:23.854 * Looking for test storage... 
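[editor's note] The scripts/common.sh trace that follows (lcov --version | awk '{print $NF}', then lt / cmp_versions / decimal) is a field-by-field numeric version comparison used to pick the older --rc lcov_* spelling of the coverage flags when the installed lcov is older than 2. A minimal sketch of that pattern with a hypothetical helper name (not the exact common.sh implementation, and it assumes purely numeric version fields):

  # Compare dotted versions field by field: exit 0 if $1 < $2.
  version_lt() {
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < len; i++ )); do
          local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1   # equal is not less-than
  }

  # Same decision the trace makes: lcov 1.15 < 2, so use the old flag spelling.
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi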
00:31:23.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:23.854 16:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:23.854 16:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:23.854 16:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:23.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.854 --rc genhtml_branch_coverage=1 00:31:23.854 --rc genhtml_function_coverage=1 00:31:23.854 --rc genhtml_legend=1 00:31:23.854 --rc geninfo_all_blocks=1 00:31:23.854 --rc geninfo_unexecuted_blocks=1 00:31:23.854 00:31:23.854 ' 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:23.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.854 --rc genhtml_branch_coverage=1 00:31:23.854 --rc genhtml_function_coverage=1 00:31:23.854 --rc genhtml_legend=1 00:31:23.854 --rc geninfo_all_blocks=1 00:31:23.854 --rc geninfo_unexecuted_blocks=1 00:31:23.854 00:31:23.854 ' 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:23.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.854 --rc genhtml_branch_coverage=1 00:31:23.854 --rc genhtml_function_coverage=1 00:31:23.854 --rc genhtml_legend=1 00:31:23.854 --rc geninfo_all_blocks=1 00:31:23.854 --rc geninfo_unexecuted_blocks=1 00:31:23.854 00:31:23.854 ' 00:31:23.854 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:23.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.854 --rc genhtml_branch_coverage=1 00:31:23.854 --rc genhtml_function_coverage=1 00:31:23.854 --rc genhtml_legend=1 00:31:23.854 --rc geninfo_all_blocks=1 00:31:23.854 --rc geninfo_unexecuted_blocks=1 00:31:23.854 
00:31:23.854 ' 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.855 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.115 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:24.115 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:24.115 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:24.115 16:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:30.687 16:33:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:30.687 16:33:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:30.687 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:30.687 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.687 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:30.688 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:30.688 Found net devices under 0000:86:00.1: cvl_0_1 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:30.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:30.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:31:30.688 00:31:30.688 --- 10.0.0.2 ping statistics --- 00:31:30.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.688 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:31:30.688 16:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:30.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:31:30.688 00:31:30.688 --- 10.0.0.1 ping statistics --- 00:31:30.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.688 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2148196 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2148196 00:31:30.688 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2148196 ']' 00:31:30.689 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.689 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.689 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:30.689 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.689 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 [2024-11-20 16:33:01.102027] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:30.689 [2024-11-20 16:33:01.102955] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:31:30.689 [2024-11-20 16:33:01.102992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.689 [2024-11-20 16:33:01.185068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:30.689 [2024-11-20 16:33:01.229697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.689 [2024-11-20 16:33:01.229732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.689 [2024-11-20 16:33:01.229739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.689 [2024-11-20 16:33:01.229745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.689 [2024-11-20 16:33:01.229750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.689 [2024-11-20 16:33:01.231308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.689 [2024-11-20 16:33:01.231414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:30.689 [2024-11-20 16:33:01.231447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.689 [2024-11-20 16:33:01.231448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:30.689 [2024-11-20 16:33:01.299611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:30.689 [2024-11-20 16:33:01.300261] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:30.689 [2024-11-20 16:33:01.300575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:30.689 [2024-11-20 16:33:01.300949] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:30.689 [2024-11-20 16:33:01.301002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
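The nvmf/common.sh trace above is the standard nvmf_tcp_init/nvmfappstart bring-up for this job. Condensed into plain shell (a sketch only — the interface names, addresses, core mask and binary path are the ones recorded in this run; the variable names follow the NVMF_* names used by nvmf/common.sh, and SPDK_BIN is shorthand for this job's workspace path), it amounts to:

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_INTERFACE=cvl_0_0      # target-side E810 port, moved into the namespace
NVMF_INITIATOR_INTERFACE=cvl_0_1   # initiator-side port, stays in the root namespace
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

ip -4 addr flush "$NVMF_TARGET_INTERFACE"
ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"

ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"

ip link set "$NVMF_INITIATOR_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# open the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT

# verify connectivity in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1

# launch the NVMe-oF target inside the namespace: interrupt mode, core mask 0xF
ip netns exec "$NVMF_TARGET_NAMESPACE" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &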
00:31:30.949 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:30.949 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:30.949 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:30.949 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:30.949 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.949 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.949 16:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:30.949 [2024-11-20 16:33:02.144287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.209 16:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.209 16:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:31.209 16:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.468 16:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:31.468 16:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.726 16:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:31.726 16:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.985 16:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:31.985 16:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:32.243 16:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.243 16:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:32.243 16:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.500 16:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:32.500 16:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.758 16:33:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:32.758 16:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:33.016 16:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:33.016 16:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:33.016 16:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:33.273 16:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:33.273 16:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:33.530 16:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:33.786 [2024-11-20 16:33:04.784164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.786 16:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:33.786 16:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:34.043 16:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:34.299 16:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:34.299 16:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:34.299 16:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:34.299 16:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:34.299 16:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:34.299 16:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:36.821 16:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:36.821 16:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:31:36.821 16:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:36.821 16:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:36.821 16:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:36.821 16:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:31:36.821 16:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:36.821 [global] 00:31:36.821 thread=1 00:31:36.821 invalidate=1 00:31:36.821 rw=write 00:31:36.821 time_based=1 00:31:36.821 runtime=1 00:31:36.821 ioengine=libaio 00:31:36.821 direct=1 00:31:36.821 bs=4096 00:31:36.821 iodepth=1 00:31:36.821 norandommap=0 00:31:36.821 numjobs=1 00:31:36.821 00:31:36.821 verify_dump=1 00:31:36.821 verify_backlog=512 00:31:36.821 verify_state_save=0 00:31:36.821 do_verify=1 00:31:36.821 verify=crc32c-intel 00:31:36.821 [job0] 00:31:36.821 filename=/dev/nvme0n1 00:31:36.821 [job1] 00:31:36.821 filename=/dev/nvme0n2 00:31:36.821 [job2] 00:31:36.821 filename=/dev/nvme0n3 00:31:36.821 [job3] 00:31:36.821 filename=/dev/nvme0n4 00:31:36.821 Could not set queue depth (nvme0n1) 00:31:36.821 Could not set queue depth (nvme0n2) 00:31:36.821 Could not set queue depth (nvme0n3) 00:31:36.821 Could not set queue depth (nvme0n4) 00:31:36.821 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:36.821 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:36.821 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:36.821 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:36.821 fio-3.35 00:31:36.821 Starting 4 threads 00:31:38.205 00:31:38.205 job0: (groupid=0, jobs=1): err= 0: pid=2149902: Wed Nov 20 16:33:09 2024 00:31:38.205 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:31:38.205 slat (nsec): min=10119, max=22705, avg=21576.64, stdev=2629.44 00:31:38.205 clat (usec): min=39217, max=45056, avg=41070.73, stdev=965.26 00:31:38.205 lat (usec): min=39239, max=45076, avg=41092.30, stdev=964.86 00:31:38.205 clat percentiles (usec): 00:31:38.205 | 1.00th=[39060], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:38.205 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:38.205 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:38.205 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:31:38.205 | 99.99th=[44827] 00:31:38.205 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:31:38.205 slat (nsec): min=11188, max=43652, avg=12730.66, stdev=2665.44 00:31:38.205 clat (usec): min=150, max=269, avg=187.84, stdev=12.27 00:31:38.205 lat (usec): min=162, max=281, avg=200.57, stdev=12.46 00:31:38.205 clat percentiles (usec): 00:31:38.205 | 1.00th=[ 155], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:31:38.205 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:31:38.205 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 204], 00:31:38.205 
| 99.00th=[ 233], 99.50th=[ 249], 99.90th=[ 269], 99.95th=[ 269], 00:31:38.205 | 99.99th=[ 269] 00:31:38.205 bw ( KiB/s): min= 4096, max= 4096, per=25.25%, avg=4096.00, stdev= 0.00, samples=1 00:31:38.205 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:38.205 lat (usec) : 250=95.51%, 500=0.37% 00:31:38.205 lat (msec) : 50=4.12% 00:31:38.205 cpu : usr=0.50%, sys=0.89%, ctx=537, majf=0, minf=1 00:31:38.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.205 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.205 job1: (groupid=0, jobs=1): err= 0: pid=2149913: Wed Nov 20 16:33:09 2024 00:31:38.205 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:31:38.205 slat (nsec): min=9348, max=23178, avg=20108.95, stdev=5030.77 00:31:38.205 clat (usec): min=40838, max=41083, avg=40965.83, stdev=66.68 00:31:38.205 lat (usec): min=40861, max=41106, avg=40985.94, stdev=66.66 00:31:38.205 clat percentiles (usec): 00:31:38.205 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:38.205 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:38.205 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:38.205 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:38.205 | 99.99th=[41157] 00:31:38.205 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:31:38.205 slat (nsec): min=9274, max=42945, avg=10202.69, stdev=1832.52 00:31:38.205 clat (usec): min=145, max=348, avg=198.38, stdev=17.30 00:31:38.205 lat (usec): min=155, max=389, avg=208.58, stdev=17.90 00:31:38.205 clat percentiles (usec): 00:31:38.205 | 1.00th=[ 153], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:31:38.205 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:31:38.205 | 70.00th=[ 204], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 221], 00:31:38.205 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 351], 99.95th=[ 351], 00:31:38.205 | 99.99th=[ 351] 00:31:38.205 bw ( KiB/s): min= 4096, max= 4096, per=25.25%, avg=4096.00, stdev= 0.00, samples=1 00:31:38.205 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:38.205 lat (usec) : 250=94.76%, 500=1.12% 00:31:38.205 lat (msec) : 50=4.12% 00:31:38.205 cpu : usr=0.20%, sys=0.50%, ctx=535, majf=0, minf=2 00:31:38.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.205 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.205 job2: (groupid=0, jobs=1): err= 0: pid=2149927: Wed Nov 20 16:33:09 2024 00:31:38.205 read: IOPS=2368, BW=9475KiB/s (9702kB/s)(9484KiB/1001msec) 00:31:38.205 slat (nsec): min=7188, max=34500, avg=8314.08, stdev=1389.32 00:31:38.205 clat (usec): min=202, max=283, avg=219.20, stdev= 7.97 00:31:38.205 lat (usec): min=209, max=291, avg=227.51, stdev= 8.17 00:31:38.205 clat percentiles (usec): 00:31:38.205 | 1.00th=[ 206], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:31:38.205 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 
60.00th=[ 221], 00:31:38.205 | 70.00th=[ 223], 80.00th=[ 225], 90.00th=[ 229], 95.00th=[ 233], 00:31:38.205 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 258], 99.95th=[ 277], 00:31:38.206 | 99.99th=[ 285] 00:31:38.206 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:38.206 slat (nsec): min=10431, max=43183, avg=11513.80, stdev=1314.06 00:31:38.206 clat (usec): min=140, max=352, avg=162.70, stdev=19.69 00:31:38.206 lat (usec): min=151, max=364, avg=174.21, stdev=19.96 00:31:38.206 clat percentiles (usec): 00:31:38.206 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 151], 00:31:38.206 | 30.00th=[ 153], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:31:38.206 | 70.00th=[ 161], 80.00th=[ 174], 90.00th=[ 196], 95.00th=[ 202], 00:31:38.206 | 99.00th=[ 223], 99.50th=[ 243], 99.90th=[ 297], 99.95th=[ 334], 00:31:38.206 | 99.99th=[ 355] 00:31:38.206 bw ( KiB/s): min=11552, max=11552, per=71.21%, avg=11552.00, stdev= 0.00, samples=1 00:31:38.206 iops : min= 2888, max= 2888, avg=2888.00, stdev= 0.00, samples=1 00:31:38.206 lat (usec) : 250=99.47%, 500=0.53% 00:31:38.206 cpu : usr=4.00%, sys=7.90%, ctx=4932, majf=0, minf=1 00:31:38.206 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.206 issued rwts: total=2371,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.206 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.206 job3: (groupid=0, jobs=1): err= 0: pid=2149932: Wed Nov 20 16:33:09 2024 00:31:38.206 read: IOPS=22, BW=91.4KiB/s (93.6kB/s)(92.0KiB/1007msec) 00:31:38.206 slat (nsec): min=10699, max=25451, avg=22593.52, stdev=3771.08 00:31:38.206 clat (usec): min=241, max=41057, avg=39175.85, stdev=8487.78 00:31:38.206 lat (usec): min=264, max=41081, avg=39198.44, stdev=8487.72 00:31:38.206 clat percentiles (usec): 00:31:38.206 | 1.00th=[ 243], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:38.206 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:38.206 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:38.206 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:38.206 | 99.99th=[41157] 00:31:38.206 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:31:38.206 slat (nsec): min=11208, max=42678, avg=13023.42, stdev=2292.52 00:31:38.206 clat (usec): min=150, max=346, avg=188.50, stdev=16.91 00:31:38.206 lat (usec): min=162, max=361, avg=201.53, stdev=17.40 00:31:38.206 clat percentiles (usec): 00:31:38.206 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 180], 00:31:38.206 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:31:38.206 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 200], 95.00th=[ 206], 00:31:38.206 | 99.00th=[ 260], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 347], 00:31:38.206 | 99.99th=[ 347] 00:31:38.206 bw ( KiB/s): min= 4096, max= 4096, per=25.25%, avg=4096.00, stdev= 0.00, samples=1 00:31:38.206 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:38.206 lat (usec) : 250=94.58%, 500=1.31% 00:31:38.206 lat (msec) : 50=4.11% 00:31:38.206 cpu : usr=0.30%, sys=1.19%, ctx=538, majf=0, minf=1 00:31:38.206 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.206 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.206 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.206 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.206 00:31:38.206 Run status group 0 (all jobs): 00:31:38.206 READ: bw=9655KiB/s (9887kB/s), 87.1KiB/s-9475KiB/s (89.2kB/s-9702kB/s), io=9752KiB (9986kB), run=1001-1010msec 00:31:38.206 WRITE: bw=15.8MiB/s (16.6MB/s), 2028KiB/s-9.99MiB/s (2076kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1010msec 00:31:38.206 00:31:38.206 Disk stats (read/write): 00:31:38.206 nvme0n1: ios=44/512, merge=0/0, ticks=1730/92, in_queue=1822, util=98.00% 00:31:38.206 nvme0n2: ios=33/512, merge=0/0, ticks=997/96, in_queue=1093, util=95.42% 00:31:38.206 nvme0n3: ios=2048/2128, merge=0/0, ticks=430/337, in_queue=767, util=88.96% 00:31:38.206 nvme0n4: ios=45/512, merge=0/0, ticks=1723/88, in_queue=1811, util=98.42% 00:31:38.206 16:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:38.206 [global] 00:31:38.206 thread=1 00:31:38.206 invalidate=1 00:31:38.206 rw=randwrite 00:31:38.206 time_based=1 00:31:38.206 runtime=1 00:31:38.206 ioengine=libaio 00:31:38.206 direct=1 00:31:38.206 bs=4096 00:31:38.206 iodepth=1 00:31:38.206 norandommap=0 00:31:38.206 numjobs=1 00:31:38.206 00:31:38.206 verify_dump=1 00:31:38.206 verify_backlog=512 00:31:38.206 verify_state_save=0 00:31:38.206 do_verify=1 00:31:38.206 verify=crc32c-intel 00:31:38.206 [job0] 00:31:38.206 filename=/dev/nvme0n1 00:31:38.206 [job1] 00:31:38.206 filename=/dev/nvme0n2 00:31:38.206 [job2] 00:31:38.206 filename=/dev/nvme0n3 00:31:38.206 [job3] 00:31:38.206 filename=/dev/nvme0n4 00:31:38.206 Could not set queue depth (nvme0n1) 00:31:38.206 Could not set queue depth (nvme0n2) 00:31:38.206 Could not set queue depth (nvme0n3) 00:31:38.206 Could not set queue depth (nvme0n4) 00:31:38.463 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.463 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.463 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.463 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.463 fio-3.35 00:31:38.463 Starting 4 threads 00:31:39.834 00:31:39.834 job0: (groupid=0, jobs=1): err= 0: pid=2150327: Wed Nov 20 16:33:10 2024 00:31:39.834 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:31:39.834 slat (nsec): min=9755, max=23315, avg=21734.50, stdev=3594.44 00:31:39.834 clat (usec): min=36255, max=42004, avg=40782.86, stdev=1038.50 00:31:39.834 lat (usec): min=36278, max=42026, avg=40804.59, stdev=1038.49 00:31:39.834 clat percentiles (usec): 00:31:39.834 | 1.00th=[36439], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:39.834 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:39.834 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:39.834 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:39.834 | 99.99th=[42206] 00:31:39.834 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:31:39.834 slat (nsec): min=9353, max=36857, avg=10745.32, stdev=2085.97 00:31:39.834 clat (usec): min=144, max=364, avg=188.62, 
stdev=20.99 00:31:39.834 lat (usec): min=154, max=375, avg=199.36, stdev=21.28 00:31:39.834 clat percentiles (usec): 00:31:39.834 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:31:39.834 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:31:39.834 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 221], 00:31:39.834 | 99.00th=[ 249], 99.50th=[ 314], 99.90th=[ 363], 99.95th=[ 363], 00:31:39.834 | 99.99th=[ 363] 00:31:39.834 bw ( KiB/s): min= 4096, max= 4096, per=29.69%, avg=4096.00, stdev= 0.00, samples=1 00:31:39.834 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:39.834 lat (usec) : 250=94.94%, 500=0.94% 00:31:39.834 lat (msec) : 50=4.12% 00:31:39.834 cpu : usr=0.50%, sys=0.30%, ctx=536, majf=0, minf=1 00:31:39.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.834 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:39.834 job1: (groupid=0, jobs=1): err= 0: pid=2150338: Wed Nov 20 16:33:10 2024 00:31:39.834 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:31:39.834 slat (nsec): min=9162, max=23211, avg=21852.35, stdev=2817.57 00:31:39.834 clat (usec): min=40756, max=42124, avg=41002.74, stdev=252.38 00:31:39.834 lat (usec): min=40765, max=42145, avg=41024.59, stdev=252.68 00:31:39.834 clat percentiles (usec): 00:31:39.834 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:39.834 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:39.834 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:39.834 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:39.834 | 99.99th=[42206] 00:31:39.834 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:31:39.834 slat (nsec): min=8988, max=39057, avg=10123.33, stdev=1808.45 00:31:39.834 clat (usec): min=146, max=364, avg=174.32, stdev=25.17 00:31:39.834 lat (usec): min=157, max=403, avg=184.44, stdev=25.60 00:31:39.834 clat percentiles (usec): 00:31:39.834 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:31:39.834 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:31:39.834 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 198], 95.00th=[ 243], 00:31:39.834 | 99.00th=[ 247], 99.50th=[ 247], 99.90th=[ 367], 99.95th=[ 367], 00:31:39.834 | 99.99th=[ 367] 00:31:39.834 bw ( KiB/s): min= 4096, max= 4096, per=29.69%, avg=4096.00, stdev= 0.00, samples=1 00:31:39.834 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:39.834 lat (usec) : 250=95.51%, 500=0.19% 00:31:39.834 lat (msec) : 50=4.30% 00:31:39.834 cpu : usr=0.10%, sys=0.58%, ctx=535, majf=0, minf=1 00:31:39.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.834 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:39.834 job2: (groupid=0, jobs=1): err= 0: pid=2150351: Wed Nov 20 16:33:10 2024 00:31:39.834 read: IOPS=1030, BW=4122KiB/s (4221kB/s)(4196KiB/1018msec) 00:31:39.834 slat (nsec): 
min=7579, max=42046, avg=8911.08, stdev=2044.44 00:31:39.834 clat (usec): min=188, max=41434, avg=697.28, stdev=4316.39 00:31:39.834 lat (usec): min=196, max=41444, avg=706.19, stdev=4316.78 00:31:39.834 clat percentiles (usec): 00:31:39.834 | 1.00th=[ 192], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 210], 00:31:39.834 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:31:39.834 | 70.00th=[ 235], 80.00th=[ 249], 90.00th=[ 285], 95.00th=[ 289], 00:31:39.834 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:31:39.834 | 99.99th=[41681] 00:31:39.834 write: IOPS=1508, BW=6035KiB/s (6180kB/s)(6144KiB/1018msec); 0 zone resets 00:31:39.834 slat (nsec): min=9900, max=48091, avg=12490.47, stdev=2371.63 00:31:39.834 clat (usec): min=131, max=476, avg=162.87, stdev=23.26 00:31:39.834 lat (usec): min=142, max=514, avg=175.36, stdev=24.22 00:31:39.834 clat percentiles (usec): 00:31:39.834 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 141], 00:31:39.834 | 30.00th=[ 145], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 167], 00:31:39.834 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 200], 00:31:39.834 | 99.00th=[ 215], 99.50th=[ 225], 99.90th=[ 273], 99.95th=[ 478], 00:31:39.834 | 99.99th=[ 478] 00:31:39.834 bw ( KiB/s): min= 1232, max=11056, per=44.53%, avg=6144.00, stdev=6946.62, samples=2 00:31:39.834 iops : min= 308, max= 2764, avg=1536.00, stdev=1736.65, samples=2 00:31:39.834 lat (usec) : 250=91.76%, 500=7.74% 00:31:39.834 lat (msec) : 2=0.04%, 50=0.46% 00:31:39.834 cpu : usr=2.26%, sys=4.03%, ctx=2586, majf=0, minf=1 00:31:39.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.834 issued rwts: total=1049,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:39.834 job3: (groupid=0, jobs=1): err= 0: pid=2150353: Wed Nov 20 16:33:10 2024 00:31:39.834 read: IOPS=916, BW=3664KiB/s (3752kB/s)(3668KiB/1001msec) 00:31:39.834 slat (nsec): min=7607, max=32192, avg=8852.71, stdev=2239.42 00:31:39.834 clat (usec): min=199, max=41167, avg=855.99, stdev=4817.53 00:31:39.834 lat (usec): min=208, max=41180, avg=864.84, stdev=4819.09 00:31:39.834 clat percentiles (usec): 00:31:39.834 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 245], 00:31:39.834 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 289], 60.00th=[ 293], 00:31:39.834 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 302], 95.00th=[ 310], 00:31:39.834 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:39.834 | 99.99th=[41157] 00:31:39.834 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:31:39.834 slat (nsec): min=10263, max=48314, avg=12414.40, stdev=3162.05 00:31:39.834 clat (usec): min=146, max=343, avg=183.89, stdev=21.39 00:31:39.834 lat (usec): min=159, max=380, avg=196.30, stdev=21.78 00:31:39.834 clat percentiles (usec): 00:31:39.834 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:31:39.834 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:31:39.834 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 239], 00:31:39.834 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 330], 99.95th=[ 343], 00:31:39.834 | 99.99th=[ 343] 00:31:39.834 bw ( KiB/s): min= 8192, max= 8192, per=59.37%, avg=8192.00, stdev= 0.00, samples=1 00:31:39.834 iops 
: min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:39.834 lat (usec) : 250=67.39%, 500=31.89% 00:31:39.834 lat (msec) : 10=0.05%, 50=0.67% 00:31:39.834 cpu : usr=1.60%, sys=3.20%, ctx=1941, majf=0, minf=1 00:31:39.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.834 issued rwts: total=917,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:39.834 00:31:39.834 Run status group 0 (all jobs): 00:31:39.834 READ: bw=7742KiB/s (7928kB/s), 87.9KiB/s-4122KiB/s (90.0kB/s-4221kB/s), io=8044KiB (8237kB), run=1001-1039msec 00:31:39.834 WRITE: bw=13.5MiB/s (14.1MB/s), 1971KiB/s-6035KiB/s (2018kB/s-6180kB/s), io=14.0MiB (14.7MB), run=1001-1039msec 00:31:39.834 00:31:39.834 Disk stats (read/write): 00:31:39.834 nvme0n1: ios=70/512, merge=0/0, ticks=1533/95, in_queue=1628, util=98.20% 00:31:39.834 nvme0n2: ios=33/512, merge=0/0, ticks=829/86, in_queue=915, util=88.02% 00:31:39.834 nvme0n3: ios=1065/1536, merge=0/0, ticks=1541/226, in_queue=1767, util=98.44% 00:31:39.834 nvme0n4: ios=638/1024, merge=0/0, ticks=623/174, in_queue=797, util=89.63% 00:31:39.834 16:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:39.834 [global] 00:31:39.834 thread=1 00:31:39.834 invalidate=1 00:31:39.834 rw=write 00:31:39.834 time_based=1 00:31:39.834 runtime=1 00:31:39.834 ioengine=libaio 00:31:39.834 direct=1 00:31:39.835 bs=4096 00:31:39.835 iodepth=128 00:31:39.835 norandommap=0 00:31:39.835 numjobs=1 00:31:39.835 00:31:39.835 verify_dump=1 00:31:39.835 verify_backlog=512 00:31:39.835 verify_state_save=0 00:31:39.835 do_verify=1 00:31:39.835 verify=crc32c-intel 00:31:39.835 [job0] 00:31:39.835 filename=/dev/nvme0n1 00:31:39.835 [job1] 00:31:39.835 filename=/dev/nvme0n2 00:31:39.835 [job2] 00:31:39.835 filename=/dev/nvme0n3 00:31:39.835 [job3] 00:31:39.835 filename=/dev/nvme0n4 00:31:39.835 Could not set queue depth (nvme0n1) 00:31:39.835 Could not set queue depth (nvme0n2) 00:31:39.835 Could not set queue depth (nvme0n3) 00:31:39.835 Could not set queue depth (nvme0n4) 00:31:39.835 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:39.835 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:39.835 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:39.835 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:39.835 fio-3.35 00:31:39.835 Starting 4 threads 00:31:41.209 00:31:41.209 job0: (groupid=0, jobs=1): err= 0: pid=2150722: Wed Nov 20 16:33:12 2024 00:31:41.209 read: IOPS=4865, BW=19.0MiB/s (19.9MB/s)(19.2MiB/1008msec) 00:31:41.209 slat (nsec): min=1034, max=12323k, avg=91817.63, stdev=605977.17 00:31:41.209 clat (usec): min=2440, max=27908, avg=11632.31, stdev=3764.99 00:31:41.209 lat (usec): min=5192, max=27911, avg=11724.13, stdev=3806.73 00:31:41.209 clat percentiles (usec): 00:31:41.209 | 1.00th=[ 5669], 5.00th=[ 7570], 10.00th=[ 8160], 20.00th=[ 9372], 00:31:41.209 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 
00:31:41.209 | 70.00th=[12387], 80.00th=[13435], 90.00th=[16057], 95.00th=[19530], 00:31:41.209 | 99.00th=[25560], 99.50th=[26084], 99.90th=[27919], 99.95th=[27919], 00:31:41.209 | 99.99th=[27919] 00:31:41.209 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:31:41.209 slat (nsec): min=1906, max=12364k, avg=102195.76, stdev=562331.81 00:31:41.209 clat (usec): min=3314, max=41940, avg=13817.51, stdev=6304.45 00:31:41.209 lat (usec): min=3322, max=41948, avg=13919.70, stdev=6346.05 00:31:41.209 clat percentiles (usec): 00:31:41.209 | 1.00th=[ 4883], 5.00th=[ 7767], 10.00th=[ 8094], 20.00th=[ 9110], 00:31:41.209 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10552], 60.00th=[13042], 00:31:41.209 | 70.00th=[16057], 80.00th=[18482], 90.00th=[24511], 95.00th=[26346], 00:31:41.209 | 99.00th=[31589], 99.50th=[36963], 99.90th=[41681], 99.95th=[41681], 00:31:41.209 | 99.99th=[41681] 00:31:41.209 bw ( KiB/s): min=16384, max=24576, per=31.74%, avg=20480.00, stdev=5792.62, samples=2 00:31:41.209 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:31:41.209 lat (msec) : 4=0.11%, 10=35.67%, 20=53.07%, 50=11.14% 00:31:41.209 cpu : usr=2.88%, sys=4.87%, ctx=542, majf=0, minf=1 00:31:41.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:41.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.209 issued rwts: total=4904,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:41.209 job1: (groupid=0, jobs=1): err= 0: pid=2150723: Wed Nov 20 16:33:12 2024 00:31:41.209 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:31:41.209 slat (nsec): min=1118, max=26291k, avg=108188.98, stdev=866167.17 00:31:41.209 clat (usec): min=1901, max=47231, avg=13621.99, stdev=7227.88 00:31:41.209 lat (usec): min=1909, max=47235, avg=13730.18, stdev=7297.62 00:31:41.209 clat percentiles (usec): 00:31:41.209 | 1.00th=[ 4293], 5.00th=[ 6194], 10.00th=[ 6980], 20.00th=[ 8979], 00:31:41.209 | 30.00th=[ 9634], 40.00th=[11469], 50.00th=[12125], 60.00th=[12518], 00:31:41.209 | 70.00th=[12911], 80.00th=[16581], 90.00th=[27132], 95.00th=[30802], 00:31:41.209 | 99.00th=[37487], 99.50th=[40633], 99.90th=[47449], 99.95th=[47449], 00:31:41.209 | 99.99th=[47449] 00:31:41.209 write: IOPS=3914, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1007msec); 0 zone resets 00:31:41.209 slat (usec): min=2, max=16319, avg=140.25, stdev=930.79 00:31:41.209 clat (usec): min=1913, max=116448, avg=19828.07, stdev=19447.60 00:31:41.209 lat (usec): min=1921, max=116459, avg=19968.32, stdev=19565.18 00:31:41.209 clat percentiles (msec): 00:31:41.209 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:31:41.209 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:31:41.209 | 70.00th=[ 17], 80.00th=[ 33], 90.00th=[ 46], 95.00th=[ 55], 00:31:41.209 | 99.00th=[ 103], 99.50th=[ 110], 99.90th=[ 117], 99.95th=[ 117], 00:31:41.209 | 99.99th=[ 117] 00:31:41.209 bw ( KiB/s): min=12288, max=18224, per=23.64%, avg=15256.00, stdev=4197.39, samples=2 00:31:41.209 iops : min= 3072, max= 4556, avg=3814.00, stdev=1049.35, samples=2 00:31:41.209 lat (msec) : 2=0.17%, 4=0.45%, 10=29.26%, 20=49.65%, 50=16.61% 00:31:41.209 lat (msec) : 100=3.04%, 250=0.81% 00:31:41.209 cpu : usr=1.49%, sys=5.07%, ctx=308, majf=0, minf=1 00:31:41.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 
00:31:41.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.209 issued rwts: total=3584,3942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:41.209 job2: (groupid=0, jobs=1): err= 0: pid=2150724: Wed Nov 20 16:33:12 2024 00:31:41.209 read: IOPS=2890, BW=11.3MiB/s (11.8MB/s)(11.8MiB/1049msec) 00:31:41.209 slat (nsec): min=1682, max=9485.3k, avg=125870.16, stdev=713126.08 00:31:41.209 clat (usec): min=7011, max=66094, avg=17858.33, stdev=10776.21 00:31:41.209 lat (usec): min=7016, max=66100, avg=17984.20, stdev=10790.60 00:31:41.209 clat percentiles (usec): 00:31:41.209 | 1.00th=[ 9372], 5.00th=[10421], 10.00th=[10814], 20.00th=[11994], 00:31:41.209 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13960], 60.00th=[15401], 00:31:41.209 | 70.00th=[17957], 80.00th=[20841], 90.00th=[25297], 95.00th=[50594], 00:31:41.209 | 99.00th=[62653], 99.50th=[65799], 99.90th=[65799], 99.95th=[66323], 00:31:41.209 | 99.99th=[66323] 00:31:41.209 write: IOPS=2928, BW=11.4MiB/s (12.0MB/s)(12.0MiB/1049msec); 0 zone resets 00:31:41.209 slat (usec): min=2, max=29236, avg=195.98, stdev=1161.43 00:31:41.209 clat (usec): min=6981, max=65051, avg=25050.32, stdev=13688.53 00:31:41.209 lat (usec): min=6992, max=65085, avg=25246.31, stdev=13767.21 00:31:41.209 clat percentiles (usec): 00:31:41.209 | 1.00th=[10290], 5.00th=[10814], 10.00th=[11338], 20.00th=[13173], 00:31:41.209 | 30.00th=[15533], 40.00th=[17433], 50.00th=[20579], 60.00th=[24249], 00:31:41.209 | 70.00th=[29492], 80.00th=[35914], 90.00th=[47973], 95.00th=[53740], 00:31:41.209 | 99.00th=[64226], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:31:41.209 | 99.99th=[65274] 00:31:41.209 bw ( KiB/s): min=12288, max=12288, per=19.04%, avg=12288.00, stdev= 0.00, samples=2 00:31:41.209 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:31:41.209 lat (msec) : 10=2.69%, 20=59.63%, 50=30.49%, 100=7.19% 00:31:41.209 cpu : usr=2.77%, sys=4.68%, ctx=342, majf=0, minf=1 00:31:41.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:41.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.209 issued rwts: total=3032,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:41.209 job3: (groupid=0, jobs=1): err= 0: pid=2150725: Wed Nov 20 16:33:12 2024 00:31:41.209 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:31:41.209 slat (nsec): min=1302, max=34452k, avg=104471.08, stdev=929446.99 00:31:41.209 clat (usec): min=2321, max=56210, avg=14185.04, stdev=6568.57 00:31:41.209 lat (usec): min=2328, max=66997, avg=14289.51, stdev=6635.56 00:31:41.209 clat percentiles (usec): 00:31:41.209 | 1.00th=[ 4047], 5.00th=[ 7308], 10.00th=[ 9110], 20.00th=[10421], 00:31:41.209 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12518], 60.00th=[12911], 00:31:41.209 | 70.00th=[14091], 80.00th=[16712], 90.00th=[21365], 95.00th=[31589], 00:31:41.209 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:31:41.209 | 99.99th=[56361] 00:31:41.209 write: IOPS=4742, BW=18.5MiB/s (19.4MB/s)(18.7MiB/1010msec); 0 zone resets 00:31:41.209 slat (usec): min=2, max=17587, avg=96.96, stdev=715.02 00:31:41.209 clat (usec): min=847, max=49150, avg=13094.47, stdev=6177.09 
00:31:41.209 lat (usec): min=856, max=49161, avg=13191.42, stdev=6218.13 00:31:41.209 clat percentiles (usec): 00:31:41.209 | 1.00th=[ 2802], 5.00th=[ 5997], 10.00th=[ 7308], 20.00th=[ 9241], 00:31:41.209 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[12125], 00:31:41.209 | 70.00th=[13042], 80.00th=[18220], 90.00th=[23462], 95.00th=[25035], 00:31:41.209 | 99.00th=[31065], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:31:41.209 | 99.99th=[49021] 00:31:41.209 bw ( KiB/s): min=17640, max=19664, per=28.90%, avg=18652.00, stdev=1431.18, samples=2 00:31:41.209 iops : min= 4410, max= 4916, avg=4663.00, stdev=357.80, samples=2 00:31:41.210 lat (usec) : 1000=0.05% 00:31:41.210 lat (msec) : 2=0.18%, 4=1.46%, 10=18.11%, 20=66.46%, 50=13.73% 00:31:41.210 lat (msec) : 100=0.01% 00:31:41.210 cpu : usr=4.96%, sys=5.85%, ctx=307, majf=0, minf=1 00:31:41.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:41.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.210 issued rwts: total=4608,4790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:41.210 00:31:41.210 Run status group 0 (all jobs): 00:31:41.210 READ: bw=60.1MiB/s (63.0MB/s), 11.3MiB/s-19.0MiB/s (11.8MB/s-19.9MB/s), io=63.0MiB (66.1MB), run=1007-1049msec 00:31:41.210 WRITE: bw=63.0MiB/s (66.1MB/s), 11.4MiB/s-19.8MiB/s (12.0MB/s-20.8MB/s), io=66.1MiB (69.3MB), run=1007-1049msec 00:31:41.210 00:31:41.210 Disk stats (read/write): 00:31:41.210 nvme0n1: ios=4302/4608, merge=0/0, ticks=22681/30580, in_queue=53261, util=99.60% 00:31:41.210 nvme0n2: ios=2611/2981, merge=0/0, ticks=29579/48927, in_queue=78506, util=98.68% 00:31:41.210 nvme0n3: ios=2618/2815, merge=0/0, ticks=12880/22034, in_queue=34914, util=98.65% 00:31:41.210 nvme0n4: ios=3606/4094, merge=0/0, ticks=46413/50815, in_queue=97228, util=98.22% 00:31:41.210 16:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:41.210 [global] 00:31:41.210 thread=1 00:31:41.210 invalidate=1 00:31:41.210 rw=randwrite 00:31:41.210 time_based=1 00:31:41.210 runtime=1 00:31:41.210 ioengine=libaio 00:31:41.210 direct=1 00:31:41.210 bs=4096 00:31:41.210 iodepth=128 00:31:41.210 norandommap=0 00:31:41.210 numjobs=1 00:31:41.210 00:31:41.210 verify_dump=1 00:31:41.210 verify_backlog=512 00:31:41.210 verify_state_save=0 00:31:41.210 do_verify=1 00:31:41.210 verify=crc32c-intel 00:31:41.210 [job0] 00:31:41.210 filename=/dev/nvme0n1 00:31:41.210 [job1] 00:31:41.210 filename=/dev/nvme0n2 00:31:41.210 [job2] 00:31:41.210 filename=/dev/nvme0n3 00:31:41.210 [job3] 00:31:41.210 filename=/dev/nvme0n4 00:31:41.210 Could not set queue depth (nvme0n1) 00:31:41.210 Could not set queue depth (nvme0n2) 00:31:41.210 Could not set queue depth (nvme0n3) 00:31:41.210 Could not set queue depth (nvme0n4) 00:31:41.467 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.467 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.467 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.467 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.467 fio-3.35 00:31:41.467 Starting 4 threads 00:31:42.843 00:31:42.843 job0: (groupid=0, jobs=1): err= 0: pid=2151098: Wed Nov 20 16:33:13 2024 00:31:42.843 read: IOPS=4173, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1005msec) 00:31:42.843 slat (nsec): min=1304, max=26303k, avg=108205.59, stdev=719182.26 00:31:42.843 clat (usec): min=1375, max=96602, avg=13659.28, stdev=10559.06 00:31:42.843 lat (usec): min=5448, max=97894, avg=13767.48, stdev=10612.79 00:31:42.843 clat percentiles (usec): 00:31:42.843 | 1.00th=[ 8094], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10290], 00:31:42.843 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[11207], 00:31:42.843 | 70.00th=[11338], 80.00th=[12256], 90.00th=[13960], 95.00th=[39060], 00:31:42.843 | 99.00th=[61080], 99.50th=[65799], 99.90th=[96994], 99.95th=[96994], 00:31:42.843 | 99.99th=[96994] 00:31:42.843 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:31:42.843 slat (usec): min=2, max=16484, avg=114.38, stdev=766.58 00:31:42.843 clat (usec): min=7577, max=68249, avg=15110.19, stdev=11672.83 00:31:42.843 lat (usec): min=7581, max=68262, avg=15224.58, stdev=11738.23 00:31:42.843 clat percentiles (usec): 00:31:42.843 | 1.00th=[ 8029], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9896], 00:31:42.843 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:31:42.843 | 70.00th=[11207], 80.00th=[13173], 90.00th=[37487], 95.00th=[43254], 00:31:42.843 | 99.00th=[58983], 99.50th=[63177], 99.90th=[68682], 99.95th=[68682], 00:31:42.843 | 99.99th=[68682] 00:31:42.843 bw ( KiB/s): min=12048, max=24576, per=25.44%, avg=18312.00, stdev=8858.63, samples=2 00:31:42.843 iops : min= 3012, max= 6144, avg=4578.00, stdev=2214.66, samples=2 00:31:42.843 lat (msec) : 2=0.01%, 10=17.30%, 20=71.40%, 50=8.21%, 100=3.07% 00:31:42.843 cpu : usr=2.59%, sys=4.48%, ctx=525, majf=0, minf=1 00:31:42.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:42.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.843 issued rwts: total=4194,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.843 job1: (groupid=0, jobs=1): err= 0: pid=2151099: Wed Nov 20 16:33:13 2024 00:31:42.843 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:31:42.843 slat (nsec): min=1384, max=11267k, avg=89299.24, stdev=681218.66 00:31:42.843 clat (usec): min=6150, max=36813, avg=12175.41, stdev=3605.68 00:31:42.843 lat (usec): min=6158, max=36818, avg=12264.71, stdev=3650.00 00:31:42.843 clat percentiles (usec): 00:31:42.843 | 1.00th=[ 6980], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[ 9896], 00:31:42.843 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11076], 60.00th=[11994], 00:31:42.843 | 70.00th=[12780], 80.00th=[14615], 90.00th=[16057], 95.00th=[17957], 00:31:42.843 | 99.00th=[28705], 99.50th=[32637], 99.90th=[35914], 99.95th=[36963], 00:31:42.843 | 99.99th=[36963] 00:31:42.843 write: IOPS=5378, BW=21.0MiB/s (22.0MB/s)(21.2MiB/1008msec); 0 zone resets 00:31:42.843 slat (usec): min=2, max=27492, avg=92.58, stdev=717.86 00:31:42.843 clat (usec): min=1775, max=36806, avg=12034.98, stdev=6202.77 00:31:42.843 lat (usec): min=1786, max=36813, avg=12127.57, stdev=6225.62 00:31:42.843 clat percentiles (usec): 00:31:42.843 | 1.00th=[ 4686], 5.00th=[ 5342], 10.00th=[ 6194], 20.00th=[ 8586], 00:31:42.843 | 
30.00th=[ 9372], 40.00th=[10159], 50.00th=[10421], 60.00th=[11338], 00:31:42.843 | 70.00th=[11731], 80.00th=[14353], 90.00th=[19006], 95.00th=[26870], 00:31:42.843 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:31:42.843 | 99.99th=[36963] 00:31:42.843 bw ( KiB/s): min=20480, max=21872, per=29.42%, avg=21176.00, stdev=984.29, samples=2 00:31:42.843 iops : min= 5120, max= 5468, avg=5294.00, stdev=246.07, samples=2 00:31:42.843 lat (msec) : 2=0.04%, 4=0.13%, 10=28.85%, 20=65.07%, 50=5.91% 00:31:42.843 cpu : usr=4.27%, sys=6.85%, ctx=332, majf=0, minf=1 00:31:42.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:42.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.843 issued rwts: total=5120,5422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.843 job2: (groupid=0, jobs=1): err= 0: pid=2151100: Wed Nov 20 16:33:13 2024 00:31:42.843 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:31:42.843 slat (nsec): min=1361, max=16511k, avg=99008.19, stdev=865578.79 00:31:42.843 clat (usec): min=865, max=36573, avg=12706.96, stdev=4584.45 00:31:42.843 lat (usec): min=991, max=36585, avg=12805.97, stdev=4646.92 00:31:42.843 clat percentiles (usec): 00:31:42.843 | 1.00th=[ 4883], 5.00th=[ 7570], 10.00th=[ 8225], 20.00th=[ 9503], 00:31:42.843 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:31:42.843 | 70.00th=[14091], 80.00th=[16319], 90.00th=[19006], 95.00th=[20579], 00:31:42.843 | 99.00th=[30802], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:31:42.843 | 99.99th=[36439] 00:31:42.843 write: IOPS=5021, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1003msec); 0 zone resets 00:31:42.843 slat (usec): min=2, max=16922, avg=92.63, stdev=764.54 00:31:42.843 clat (usec): min=534, max=67094, avg=13631.65, stdev=9070.97 00:31:42.843 lat (usec): min=1476, max=67782, avg=13724.29, stdev=9134.48 00:31:42.843 clat percentiles (usec): 00:31:42.843 | 1.00th=[ 4424], 5.00th=[ 6063], 10.00th=[ 7111], 20.00th=[ 8848], 00:31:42.843 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[11469], 60.00th=[11731], 00:31:42.843 | 70.00th=[12649], 80.00th=[16909], 90.00th=[21365], 95.00th=[22938], 00:31:42.843 | 99.00th=[62653], 99.50th=[64750], 99.90th=[66847], 99.95th=[66847], 00:31:42.843 | 99.99th=[66847] 00:31:42.843 bw ( KiB/s): min=18888, max=20384, per=27.28%, avg=19636.00, stdev=1057.83, samples=2 00:31:42.843 iops : min= 4722, max= 5096, avg=4909.00, stdev=264.46, samples=2 00:31:42.843 lat (usec) : 750=0.01%, 1000=0.01% 00:31:42.843 lat (msec) : 2=0.08%, 4=0.30%, 10=27.33%, 20=62.80%, 50=8.39% 00:31:42.843 lat (msec) : 100=1.08% 00:31:42.843 cpu : usr=2.89%, sys=7.09%, ctx=307, majf=0, minf=1 00:31:42.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:42.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.843 issued rwts: total=4608,5037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.843 job3: (groupid=0, jobs=1): err= 0: pid=2151101: Wed Nov 20 16:33:13 2024 00:31:42.843 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:31:42.843 slat (nsec): min=1639, max=20601k, avg=169850.80, stdev=1236480.74 00:31:42.843 clat (usec): 
min=518, max=106888, avg=20502.10, stdev=22693.36 00:31:42.843 lat (msec): min=2, max=106, avg=20.67, stdev=22.87 00:31:42.843 clat percentiles (msec): 00:31:42.843 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 10], 20.00th=[ 12], 00:31:42.843 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:31:42.843 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 29], 95.00th=[ 99], 00:31:42.843 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 107], 00:31:42.843 | 99.99th=[ 107] 00:31:42.843 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:31:42.843 slat (usec): min=2, max=19854, avg=142.24, stdev=1174.01 00:31:42.843 clat (usec): min=1213, max=95297, avg=20900.03, stdev=15817.94 00:31:42.843 lat (usec): min=1249, max=95304, avg=21042.27, stdev=15894.96 00:31:42.843 clat percentiles (usec): 00:31:42.843 | 1.00th=[ 2540], 5.00th=[ 7439], 10.00th=[ 8586], 20.00th=[ 9503], 00:31:42.843 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13566], 60.00th=[17957], 00:31:42.843 | 70.00th=[21627], 80.00th=[31327], 90.00th=[47449], 95.00th=[57934], 00:31:42.843 | 99.00th=[74974], 99.50th=[93848], 99.90th=[93848], 99.95th=[94897], 00:31:42.843 | 99.99th=[94897] 00:31:42.843 bw ( KiB/s): min= 8600, max=15976, per=17.07%, avg=12288.00, stdev=5215.62, samples=2 00:31:42.843 iops : min= 2150, max= 3994, avg=3072.00, stdev=1303.90, samples=2 00:31:42.843 lat (usec) : 750=0.02% 00:31:42.843 lat (msec) : 2=0.02%, 4=2.20%, 10=16.98%, 20=54.88%, 50=18.12% 00:31:42.843 lat (msec) : 100=5.84%, 250=1.95% 00:31:42.843 cpu : usr=2.20%, sys=2.99%, ctx=252, majf=0, minf=1 00:31:42.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:42.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.843 issued rwts: total=3072,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.843 00:31:42.843 Run status group 0 (all jobs): 00:31:42.844 READ: bw=65.9MiB/s (69.1MB/s), 12.0MiB/s-19.8MiB/s (12.5MB/s-20.8MB/s), io=66.4MiB (69.6MB), run=1003-1008msec 00:31:42.844 WRITE: bw=70.3MiB/s (73.7MB/s), 12.0MiB/s-21.0MiB/s (12.5MB/s-22.0MB/s), io=70.9MiB (74.3MB), run=1003-1008msec 00:31:42.844 00:31:42.844 Disk stats (read/write): 00:31:42.844 nvme0n1: ios=3641/4096, merge=0/0, ticks=12674/14147, in_queue=26821, util=87.27% 00:31:42.844 nvme0n2: ios=4484/4608, merge=0/0, ticks=52131/48934, in_queue=101065, util=91.08% 00:31:42.844 nvme0n3: ios=3924/4096, merge=0/0, ticks=48535/56715, in_queue=105250, util=94.70% 00:31:42.844 nvme0n4: ios=2200/2560, merge=0/0, ticks=26422/29681, in_queue=56103, util=95.29% 00:31:42.844 16:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:42.844 16:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2151327 00:31:42.844 16:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:42.844 16:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:42.844 [global] 00:31:42.844 thread=1 00:31:42.844 invalidate=1 00:31:42.844 rw=read 00:31:42.844 time_based=1 00:31:42.844 runtime=10 00:31:42.844 ioengine=libaio 00:31:42.844 direct=1 00:31:42.844 bs=4096 00:31:42.844 iodepth=1 00:31:42.844 norandommap=1 
00:31:42.844 numjobs=1 00:31:42.844 00:31:42.844 [job0] 00:31:42.844 filename=/dev/nvme0n1 00:31:42.844 [job1] 00:31:42.844 filename=/dev/nvme0n2 00:31:42.844 [job2] 00:31:42.844 filename=/dev/nvme0n3 00:31:42.844 [job3] 00:31:42.844 filename=/dev/nvme0n4 00:31:42.844 Could not set queue depth (nvme0n1) 00:31:42.844 Could not set queue depth (nvme0n2) 00:31:42.844 Could not set queue depth (nvme0n3) 00:31:42.844 Could not set queue depth (nvme0n4) 00:31:43.123 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.123 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.123 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.123 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.123 fio-3.35 00:31:43.123 Starting 4 threads 00:31:45.763 16:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:46.020 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=35266560, buflen=4096 00:31:46.020 fio: pid=2151475, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:46.020 16:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:46.278 16:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.278 16:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:46.278 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=311296, buflen=4096 00:31:46.278 fio: pid=2151474, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:46.536 16:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.536 16:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:46.536 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=35315712, buflen=4096 00:31:46.536 fio: pid=2151471, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:46.536 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=53366784, buflen=4096 00:31:46.536 fio: pid=2151473, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:46.536 16:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.536 16:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:46.536 00:31:46.536 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2151471: Wed Nov 20 16:33:17 2024 00:31:46.536 read: IOPS=2748, BW=10.7MiB/s 
(11.3MB/s)(33.7MiB/3137msec) 00:31:46.536 slat (usec): min=5, max=15815, avg=11.00, stdev=170.22 00:31:46.536 clat (usec): min=187, max=42632, avg=348.01, stdev=1815.05 00:31:46.536 lat (usec): min=196, max=42656, avg=359.01, stdev=1823.22 00:31:46.536 clat percentiles (usec): 00:31:46.536 | 1.00th=[ 198], 5.00th=[ 217], 10.00th=[ 227], 20.00th=[ 241], 00:31:46.536 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:31:46.536 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 351], 00:31:46.536 | 99.00th=[ 412], 99.50th=[ 461], 99.90th=[41157], 99.95th=[41157], 00:31:46.536 | 99.99th=[42730] 00:31:46.536 bw ( KiB/s): min= 1256, max=15048, per=30.84%, avg=11294.17, stdev=5282.69, samples=6 00:31:46.536 iops : min= 314, max= 3762, avg=2823.50, stdev=1320.69, samples=6 00:31:46.536 lat (usec) : 250=35.61%, 500=64.01%, 750=0.16% 00:31:46.536 lat (msec) : 50=0.20% 00:31:46.536 cpu : usr=1.63%, sys=4.75%, ctx=8625, majf=0, minf=1 00:31:46.536 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.536 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.536 issued rwts: total=8623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.536 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.536 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2151473: Wed Nov 20 16:33:17 2024 00:31:46.536 read: IOPS=3931, BW=15.4MiB/s (16.1MB/s)(50.9MiB/3314msec) 00:31:46.536 slat (usec): min=6, max=26636, avg=13.34, stdev=268.38 00:31:46.536 clat (usec): min=172, max=11951, avg=237.65, stdev=109.93 00:31:46.536 lat (usec): min=180, max=27007, avg=250.99, stdev=293.42 00:31:46.536 clat percentiles (usec): 00:31:46.536 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 215], 00:31:46.536 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:31:46.536 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 306], 00:31:46.536 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 578], 99.95th=[ 783], 00:31:46.536 | 99.99th=[ 1004] 00:31:46.536 bw ( KiB/s): min=14920, max=17912, per=43.47%, avg=15916.67, stdev=1058.71, samples=6 00:31:46.536 iops : min= 3730, max= 4478, avg=3979.17, stdev=264.68, samples=6 00:31:46.536 lat (usec) : 250=76.49%, 500=23.32%, 750=0.13%, 1000=0.04% 00:31:46.536 lat (msec) : 2=0.01%, 20=0.01% 00:31:46.536 cpu : usr=1.15%, sys=4.26%, ctx=13036, majf=0, minf=1 00:31:46.536 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.536 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.537 issued rwts: total=13030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.537 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2151474: Wed Nov 20 16:33:17 2024 00:31:46.537 read: IOPS=26, BW=104KiB/s (106kB/s)(304KiB/2936msec) 00:31:46.537 slat (nsec): min=4314, max=36847, avg=18808.17, stdev=6986.55 00:31:46.537 clat (usec): min=355, max=41974, avg=38318.54, stdev=10120.23 00:31:46.537 lat (usec): min=392, max=41997, avg=38337.29, stdev=10117.40 00:31:46.537 clat percentiles (usec): 00:31:46.537 | 1.00th=[ 355], 5.00th=[ 510], 10.00th=[40633], 20.00th=[40633], 00:31:46.537 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:46.537 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:46.537 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:46.537 | 99.99th=[42206] 00:31:46.537 bw ( KiB/s): min= 96, max= 112, per=0.28%, avg=104.00, stdev= 8.00, samples=5 00:31:46.537 iops : min= 24, max= 28, avg=26.00, stdev= 2.00, samples=5 00:31:46.537 lat (usec) : 500=3.90%, 750=2.60% 00:31:46.537 lat (msec) : 50=92.21% 00:31:46.537 cpu : usr=0.07%, sys=0.00%, ctx=77, majf=0, minf=2 00:31:46.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.537 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.537 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.537 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2151475: Wed Nov 20 16:33:17 2024 00:31:46.537 read: IOPS=3211, BW=12.5MiB/s (13.2MB/s)(33.6MiB/2681msec) 00:31:46.537 slat (nsec): min=6672, max=36056, avg=7751.49, stdev=1128.85 00:31:46.537 clat (usec): min=202, max=41310, avg=299.94, stdev=1455.55 00:31:46.537 lat (usec): min=209, max=41322, avg=307.69, stdev=1456.06 00:31:46.537 clat percentiles (usec): 00:31:46.537 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:31:46.537 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:31:46.537 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 289], 00:31:46.537 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[41157], 99.95th=[41157], 00:31:46.537 | 99.99th=[41157] 00:31:46.537 bw ( KiB/s): min= 8440, max=15320, per=35.01%, avg=12819.20, stdev=3301.94, samples=5 00:31:46.537 iops : min= 2110, max= 3830, avg=3204.80, stdev=825.48, samples=5 00:31:46.537 lat (usec) : 250=61.18%, 500=38.66%, 750=0.01% 00:31:46.537 lat (msec) : 2=0.01%, 50=0.13% 00:31:46.537 cpu : usr=0.41%, sys=3.43%, ctx=8613, majf=0, minf=2 00:31:46.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.537 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.537 issued rwts: total=8611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.537 00:31:46.537 Run status group 0 (all jobs): 00:31:46.537 READ: bw=35.8MiB/s (37.5MB/s), 104KiB/s-15.4MiB/s (106kB/s-16.1MB/s), io=119MiB (124MB), run=2681-3314msec 00:31:46.537 00:31:46.537 Disk stats (read/write): 00:31:46.537 nvme0n1: ios=8618/0, merge=0/0, ticks=2816/0, in_queue=2816, util=93.96% 00:31:46.537 nvme0n2: ios=12172/0, merge=0/0, ticks=2808/0, in_queue=2808, util=94.04% 00:31:46.537 nvme0n3: ios=72/0, merge=0/0, ticks=2790/0, in_queue=2790, util=96.21% 00:31:46.537 nvme0n4: ios=8272/0, merge=0/0, ticks=2899/0, in_queue=2899, util=98.68% 00:31:46.795 16:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.795 16:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:47.052 16:33:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:47.053 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:47.310 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:47.310 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:47.567 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:47.567 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:47.567 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:47.567 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2151327 00:31:47.567 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:47.567 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:47.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:47.826 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:47.826 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:47.826 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:47.826 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:47.826 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:47.826 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:47.826 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:47.826 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:47.826 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:47.826 nvmf hotplug test: fio failed as expected 00:31:47.826 16:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:48.085 16:33:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:48.085 rmmod nvme_tcp 00:31:48.085 rmmod nvme_fabrics 00:31:48.085 rmmod nvme_keyring 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2148196 ']' 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2148196 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2148196 ']' 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2148196 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2148196 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2148196' 00:31:48.085 killing process with pid 2148196 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2148196 00:31:48.085 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2148196 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.344 16:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.249 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:50.249 00:31:50.249 real 0m26.575s 00:31:50.249 user 1m31.804s 00:31:50.249 sys 0m11.539s 00:31:50.249 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:50.249 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:50.249 ************************************ 00:31:50.249 END TEST nvmf_fio_target 00:31:50.249 ************************************ 00:31:50.509 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:50.509 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:50.509 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:50.509 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:50.509 ************************************ 00:31:50.509 START TEST nvmf_bdevio 00:31:50.509 ************************************ 00:31:50.509 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:50.509 * Looking for test storage... 
00:31:50.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:50.509 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:50.509 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:50.509 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:50.509 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:50.509 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.509 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.510 --rc genhtml_branch_coverage=1 00:31:50.510 --rc genhtml_function_coverage=1 00:31:50.510 --rc genhtml_legend=1 00:31:50.510 --rc geninfo_all_blocks=1 00:31:50.510 --rc geninfo_unexecuted_blocks=1 00:31:50.510 00:31:50.510 ' 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.510 --rc genhtml_branch_coverage=1 00:31:50.510 --rc genhtml_function_coverage=1 00:31:50.510 --rc genhtml_legend=1 00:31:50.510 --rc geninfo_all_blocks=1 00:31:50.510 --rc geninfo_unexecuted_blocks=1 00:31:50.510 00:31:50.510 ' 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.510 --rc genhtml_branch_coverage=1 00:31:50.510 --rc genhtml_function_coverage=1 00:31:50.510 --rc genhtml_legend=1 00:31:50.510 --rc geninfo_all_blocks=1 00:31:50.510 --rc geninfo_unexecuted_blocks=1 00:31:50.510 00:31:50.510 ' 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.510 --rc genhtml_branch_coverage=1 00:31:50.510 --rc genhtml_function_coverage=1 00:31:50.510 --rc genhtml_legend=1 00:31:50.510 --rc geninfo_all_blocks=1 00:31:50.510 --rc geninfo_unexecuted_blocks=1 00:31:50.510 00:31:50.510 ' 00:31:50.510 16:33:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.510 16:33:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:50.510 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:50.511 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.511 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:50.511 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:50.511 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:50.511 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.511 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.511 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.511 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:50.511 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:50.511 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:50.511 16:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:57.082 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:57.082 16:33:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:57.082 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.082 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:57.083 Found net devices under 0000:86:00.0: cvl_0_0 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:57.083 Found net devices under 0000:86:00.1: cvl_0_1 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:57.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:31:57.083 00:31:57.083 --- 10.0.0.2 ping statistics --- 00:31:57.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.083 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:57.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:31:57.083 00:31:57.083 --- 10.0.0.1 ping statistics --- 00:31:57.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.083 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:57.083 16:33:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2155716 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2155716 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2155716 ']' 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:57.083 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.083 [2024-11-20 16:33:27.690555] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:57.083 [2024-11-20 16:33:27.691502] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:31:57.084 [2024-11-20 16:33:27.691541] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:57.084 [2024-11-20 16:33:27.771593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:57.084 [2024-11-20 16:33:27.813246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:57.084 [2024-11-20 16:33:27.813281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:57.084 [2024-11-20 16:33:27.813288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:57.084 [2024-11-20 16:33:27.813295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:57.084 [2024-11-20 16:33:27.813300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:57.084 [2024-11-20 16:33:27.814960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:57.084 [2024-11-20 16:33:27.815082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:57.084 [2024-11-20 16:33:27.815188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:57.084 [2024-11-20 16:33:27.815189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:57.084 [2024-11-20 16:33:27.883329] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
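The trace above shows nvmf/common.sh turning the two E810 ports into a small back-to-back NVMe/TCP topology before the interrupt-mode target comes up: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened with a tagged iptables rule, and both directions are ping-checked. The condensed sketch below only re-states what the traced commands do; the interface names and addresses are copied from this run, and the lspci line is a manual stand-in for the script's PCI bus cache lookup, not part of the helper itself.

    #!/usr/bin/env bash
    # Condensed re-creation of the traced prepare_net_devs / nvmf_tcp_init steps
    # (names and addresses taken from this log; illustrative, not the real helper).
    set -e

    TGT_IF=cvl_0_0            # moved into the target namespace
    INI_IF=cvl_0_1            # stays in the root namespace (initiator side)
    NS=cvl_0_0_ns_spdk

    # The trace matched both ports as 0x8086:0x159b (Intel E810); a quick manual equivalent:
    lspci -d 8086:159b

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Same tagged ACCEPT rule the ipts wrapper installs for NVMe/TCP port 4420.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Connectivity check in both directions, as in the log.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1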
00:31:57.084 [2024-11-20 16:33:27.884368] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:57.084 [2024-11-20 16:33:27.884370] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:57.084 [2024-11-20 16:33:27.884781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:57.084 [2024-11-20 16:33:27.884821] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.084 [2024-11-20 16:33:27.951991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.084 16:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.084 Malloc0 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.084 16:33:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.084 [2024-11-20 16:33:28.032143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:57.084 { 00:31:57.084 "params": { 00:31:57.084 "name": "Nvme$subsystem", 00:31:57.084 "trtype": "$TEST_TRANSPORT", 00:31:57.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.084 "adrfam": "ipv4", 00:31:57.084 "trsvcid": "$NVMF_PORT", 00:31:57.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.084 "hdgst": ${hdgst:-false}, 00:31:57.084 "ddgst": ${ddgst:-false} 00:31:57.084 }, 00:31:57.084 "method": "bdev_nvme_attach_controller" 00:31:57.084 } 00:31:57.084 EOF 00:31:57.084 )") 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:57.084 16:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:57.084 "params": { 00:31:57.084 "name": "Nvme1", 00:31:57.084 "trtype": "tcp", 00:31:57.084 "traddr": "10.0.0.2", 00:31:57.084 "adrfam": "ipv4", 00:31:57.084 "trsvcid": "4420", 00:31:57.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:57.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:57.084 "hdgst": false, 00:31:57.084 "ddgst": false 00:31:57.084 }, 00:31:57.084 "method": "bdev_nvme_attach_controller" 00:31:57.084 }' 00:31:57.084 [2024-11-20 16:33:28.083533] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
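At this point the test has an interrupt-mode target running on core mask 0x78 inside the namespace, has created the TCP transport, a 64 MiB / 512 B Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.2:4420, and is handing bdevio the attach-controller JSON printed above via /dev/fd/62. The sketch below replays those steps under a few stated simplifications: scripts/rpc.py stands in for the rpc_cmd wrapper, the printed params block is wrapped in the standard "subsystems"/"bdev" JSON-config envelope that the --json loader expects, the JSON goes to a temporary file instead of a file descriptor, and the absolute paths are simply the ones from this workspace.

    # Sketch of the target provisioning and bdevio run traced above (illustrative).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    # Interrupt-mode target on core mask 0x78, as launched by nvmfappstart above.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &

    # Stand-in for waitforlisten: poll the default RPC socket until it answers.
    until "$RPC" spdk_get_version &> /dev/null; do sleep 0.5; done

    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # The params printed by gen_nvmf_target_json, wrapped for the --json config loader.
    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    "$SPDK/test/bdev/bdevio/bdevio" --json /tmp/bdevio_nvme.json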
00:31:57.084 [2024-11-20 16:33:28.083579] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155743 ] 00:31:57.084 [2024-11-20 16:33:28.158694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:57.084 [2024-11-20 16:33:28.202513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.084 [2024-11-20 16:33:28.202619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.084 [2024-11-20 16:33:28.202619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:57.340 I/O targets: 00:31:57.340 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:57.340 00:31:57.340 00:31:57.340 CUnit - A unit testing framework for C - Version 2.1-3 00:31:57.340 http://cunit.sourceforge.net/ 00:31:57.340 00:31:57.340 00:31:57.340 Suite: bdevio tests on: Nvme1n1 00:31:57.340 Test: blockdev write read block ...passed 00:31:57.597 Test: blockdev write zeroes read block ...passed 00:31:57.597 Test: blockdev write zeroes read no split ...passed 00:31:57.597 Test: blockdev write zeroes read split ...passed 00:31:57.597 Test: blockdev write zeroes read split partial ...passed 00:31:57.597 Test: blockdev reset ...[2024-11-20 16:33:28.622648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:57.597 [2024-11-20 16:33:28.622709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e2340 (9): Bad file descriptor 00:31:57.597 [2024-11-20 16:33:28.715177] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:31:57.597 passed 00:31:57.597 Test: blockdev write read 8 blocks ...passed 00:31:57.597 Test: blockdev write read size > 128k ...passed 00:31:57.597 Test: blockdev write read invalid size ...passed 00:31:57.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:57.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:57.597 Test: blockdev write read max offset ...passed 00:31:57.854 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:57.854 Test: blockdev writev readv 8 blocks ...passed 00:31:57.854 Test: blockdev writev readv 30 x 1block ...passed 00:31:57.854 Test: blockdev writev readv block ...passed 00:31:57.854 Test: blockdev writev readv size > 128k ...passed 00:31:57.854 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:57.854 Test: blockdev comparev and writev ...[2024-11-20 16:33:28.925075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.854 [2024-11-20 16:33:28.925102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.854 [2024-11-20 16:33:28.925116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.854 [2024-11-20 16:33:28.925124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:57.854 [2024-11-20 16:33:28.925420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.854 [2024-11-20 16:33:28.925431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:57.854 [2024-11-20 16:33:28.925442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.854 [2024-11-20 16:33:28.925449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:57.854 [2024-11-20 16:33:28.925725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.854 [2024-11-20 16:33:28.925734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:57.854 [2024-11-20 16:33:28.925746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.854 [2024-11-20 16:33:28.925753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:57.854 [2024-11-20 16:33:28.926026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.854 [2024-11-20 16:33:28.926037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:57.854 [2024-11-20 16:33:28.926049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.854 [2024-11-20 16:33:28.926056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:57.854 passed 00:31:57.854 Test: blockdev nvme passthru rw ...passed 00:31:57.854 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:33:29.007585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:57.854 [2024-11-20 16:33:29.007599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:57.854 [2024-11-20 16:33:29.007705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:57.854 [2024-11-20 16:33:29.007714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:57.854 [2024-11-20 16:33:29.007817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:57.854 [2024-11-20 16:33:29.007826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:57.854 [2024-11-20 16:33:29.007933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:57.854 [2024-11-20 16:33:29.007942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:57.854 passed 00:31:57.854 Test: blockdev nvme admin passthru ...passed 00:31:57.854 Test: blockdev copy ...passed 00:31:57.854 00:31:57.854 Run Summary: Type Total Ran Passed Failed Inactive 00:31:57.854 suites 1 1 n/a 0 0 00:31:57.854 tests 23 23 23 0 0 00:31:57.854 asserts 152 152 152 0 n/a 00:31:57.854 00:31:57.854 Elapsed time = 1.108 seconds 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:58.114 rmmod nvme_tcp 00:31:58.114 rmmod nvme_fabrics 00:31:58.114 rmmod nvme_keyring 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
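With the bdevio suite done (23/23 tests, 152 asserts, about 1.1 s), the script tears the target back down: the subsystem is deleted over RPC, the nvme-tcp and nvme-fabrics modules are unloaded, the target process is killed, the SPDK_NVMF-tagged iptables rules are stripped, and the namespace plumbing is removed. A rough equivalent of that teardown is below; the ip netns del line is an assumption about what the _remove_spdk_ns helper ultimately does for this topology, the rest mirrors the traced commands.

    # Approximate teardown mirroring the traced nvmftestfini / nvmfcleanup path.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as echoed above
    modprobe -v -r nvme-fabrics

    kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid was 2155716 in this run

    # Drop only the SPDK_NVMF-tagged rules, leaving the rest of the firewall alone.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Assumed equivalent of _remove_spdk_ns for this topology.
    ip netns del cvl_0_0_ns_spdk 2> /dev/null || true
    ip -4 addr flush cvl_0_1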
00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2155716 ']' 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2155716 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2155716 ']' 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2155716 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2155716 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:58.114 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2155716' 00:31:58.114 killing process with pid 2155716 00:31:58.115 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2155716 00:31:58.115 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2155716 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.372 16:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.908 16:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:00.908 00:32:00.908 real 0m10.068s 00:32:00.908 user 
0m9.506s 00:32:00.908 sys 0m5.191s 00:32:00.908 16:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.908 16:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.908 ************************************ 00:32:00.908 END TEST nvmf_bdevio 00:32:00.908 ************************************ 00:32:00.908 16:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:00.908 00:32:00.908 real 4m33.050s 00:32:00.908 user 9m5.000s 00:32:00.908 sys 1m51.692s 00:32:00.908 16:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.908 16:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:00.908 ************************************ 00:32:00.908 END TEST nvmf_target_core_interrupt_mode 00:32:00.908 ************************************ 00:32:00.908 16:33:31 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:00.908 16:33:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:00.908 16:33:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.908 16:33:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:00.908 ************************************ 00:32:00.908 START TEST nvmf_interrupt 00:32:00.908 ************************************ 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:00.908 * Looking for test storage... 
00:32:00.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:00.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.908 --rc genhtml_branch_coverage=1 00:32:00.908 --rc genhtml_function_coverage=1 00:32:00.908 --rc genhtml_legend=1 00:32:00.908 --rc geninfo_all_blocks=1 00:32:00.908 --rc geninfo_unexecuted_blocks=1 00:32:00.908 00:32:00.908 ' 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:00.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.908 --rc genhtml_branch_coverage=1 00:32:00.908 --rc genhtml_function_coverage=1 00:32:00.908 --rc genhtml_legend=1 00:32:00.908 --rc geninfo_all_blocks=1 00:32:00.908 --rc geninfo_unexecuted_blocks=1 00:32:00.908 00:32:00.908 ' 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:00.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.908 --rc genhtml_branch_coverage=1 00:32:00.908 --rc genhtml_function_coverage=1 00:32:00.908 --rc genhtml_legend=1 00:32:00.908 --rc geninfo_all_blocks=1 00:32:00.908 --rc geninfo_unexecuted_blocks=1 00:32:00.908 00:32:00.908 ' 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:00.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.908 --rc genhtml_branch_coverage=1 00:32:00.908 --rc genhtml_function_coverage=1 00:32:00.908 --rc genhtml_legend=1 00:32:00.908 --rc geninfo_all_blocks=1 00:32:00.908 --rc geninfo_unexecuted_blocks=1 00:32:00.908 00:32:00.908 ' 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.908 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.909 16:33:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:07.479 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.479 16:33:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:07.479 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:07.479 Found net devices under 0000:86:00.0: cvl_0_0 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:07.479 Found net devices under 0000:86:00.1: cvl_0_1 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:07.479 16:33:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:07.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:32:07.479 00:32:07.479 --- 10.0.0.2 ping statistics --- 00:32:07.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.479 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
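The setup above splits the two NIC ports into a target/initiator pair: cvl_0_0 (10.0.0.2) is moved into the private namespace cvl_0_0_ns_spdk for the SPDK target, cvl_0_1 (10.0.0.1) stays in the host namespace as the initiator, TCP port 4420 is opened, and reachability is verified in both directions. A condensed sketch of that topology, using the interface names this run detected:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The rule is tagged with an SPDK_NVMF comment so the cleanup phase can strip it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                        # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespace -> host
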
00:32:07.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:32:07.479 00:32:07.479 --- 10.0.0.1 ping statistics --- 00:32:07.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.479 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2159519 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2159519 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2159519 ']' 00:32:07.479 16:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.480 16:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:07.480 16:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.480 16:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:07.480 16:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.480 [2024-11-20 16:33:37.856092] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:07.480 [2024-11-20 16:33:37.856995] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:32:07.480 [2024-11-20 16:33:37.857028] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.480 [2024-11-20 16:33:37.937448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:07.480 [2024-11-20 16:33:37.978039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
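The target itself is then started inside that namespace in interrupt mode, which is the property this whole test verifies (reactors should sleep until work arrives instead of busy-polling). The launch reduces to:

    # Launch the SPDK NVMe-oF target inside the target namespace.
    #   -i 0              shared-memory instance id
    #   -e 0xFFFF         tracepoint group mask (matches the notice in the trace)
    #   --interrupt-mode  reactors wait on events instead of busy-polling
    #   -m 0x3            core mask: one reactor each on cores 0 and 1
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
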
00:32:07.480 [2024-11-20 16:33:37.978074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.480 [2024-11-20 16:33:37.978081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.480 [2024-11-20 16:33:37.978087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.480 [2024-11-20 16:33:37.978092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.480 [2024-11-20 16:33:37.979302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.480 [2024-11-20 16:33:37.979304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.480 [2024-11-20 16:33:38.046262] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:07.480 [2024-11-20 16:33:38.046807] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:07.480 [2024-11-20 16:33:38.047018] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:07.480 5000+0 records in 00:32:07.480 5000+0 records out 00:32:07.480 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0174261 s, 588 MB/s 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.480 AIO0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.480 [2024-11-20 16:33:38.172081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.480 16:33:38 
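Once the target answers RPCs, the harness backs a namespace with a plain 10 MB file and provisions the NVMe-oF side. Reduced to direct rpc.py calls (the subsystem, namespace and listener commands are the ones traced immediately below; in the test they go through rpc_cmd against the target's RPC socket):

    dd if=/dev/zero of=./aiofile bs=2048 count=5000              # 10 MB backing file
    ./scripts/rpc.py bdev_aio_create ./aiofile AIO0 2048         # expose it as bdev AIO0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
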
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.480 [2024-11-20 16:33:38.208411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2159519 0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2159519 0 idle 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2159519 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2159519 -w 256 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2159519 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0' 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2159519 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2159519 1 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2159519 1 idle 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2159519 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2159519 -w 256 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2159529 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2159529 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2159633 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
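The load phase drives the listener with spdk_nvme_perf from cores 2-3 while the busy check (run with BUSY_THRESHOLD=30) expects both reactors to climb out of idle. The invocation used here, annotated:

    # 4 KiB random mixed I/O (30% reads via -M), queue depth 256, 10 seconds, cores 2-3,
    # against the TCP listener exported by nqn.2016-06.io.spdk:cnode1.
    ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    perf_pid=$!                                  # the test later waits on this pid
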
00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2159519 0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2159519 0 busy 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2159519 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2159519 -w 256 00:32:07.480 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:07.737 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2159519 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:00.26 reactor_0' 00:32:07.737 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2159519 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:00.26 reactor_0 00:32:07.737 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.737 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.737 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:07.737 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:07.737 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:07.738 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:07.738 16:33:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:08.669 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:08.669 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:08.669 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2159519 -w 256 00:32:08.669 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2159519 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.55 reactor_0' 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2159519 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.55 reactor_0 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2159519 1 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2159519 1 busy 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2159519 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2159519 -w 256 00:32:08.926 16:33:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:08.926 16:33:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2159529 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.33 reactor_1' 00:32:08.926 16:33:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2159529 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.33 reactor_1 00:32:08.926 16:33:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.926 16:33:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:08.926 16:33:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:08.926 16:33:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:08.926 16:33:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:08.926 16:33:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:08.926 16:33:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:08.926 16:33:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:08.926 16:33:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2159633 00:32:18.883 Initializing NVMe Controllers 00:32:18.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:18.883 Controller IO queue size 256, less than required. 00:32:18.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:18.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:18.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:18.883 Initialization complete. Launching workers. 
00:32:18.883 ======================================================== 00:32:18.883 Latency(us) 00:32:18.884 Device Information : IOPS MiB/s Average min max 00:32:18.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16255.90 63.50 15755.82 3199.56 30196.07 00:32:18.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16425.40 64.16 15594.21 7523.70 57788.52 00:32:18.884 ======================================================== 00:32:18.884 Total : 32681.30 127.66 15674.60 3199.56 57788.52 00:32:18.884 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2159519 0 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2159519 0 idle 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2159519 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2159519 -w 256 00:32:18.884 16:33:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2159519 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.24 reactor_0' 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2159519 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.24 reactor_0 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2159519 1 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2159519 1 idle 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2159519 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2159519 -w 256 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2159529 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2159529 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:18.884 16:33:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2159519 0 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2159519 0 idle 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2159519 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2159519 -w 256 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2159519 root 20 0 128.2g 72960 33792 S 6.2 0.0 0:20.48 reactor_0' 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2159519 root 20 0 128.2g 72960 33792 S 6.2 0.0 0:20.48 reactor_0 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2159519 1 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2159519 1 idle 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2159519 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
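Every idle/busy probe in this trace is the same sampling step: take one threaded top snapshot of the target pid, pull the reactor thread's %CPU column, truncate it to an integer, and compare it against the threshold (at most 30% to count as idle, at least 30% to count as busy once BUSY_THRESHOLD is lowered). A stripped-down equivalent:

    reactor_cpu() {    # %CPU of thread reactor_<idx> inside process <pid>
        top -bHn 1 -p "$1" -w 256 | grep "reactor_$2" | sed -e 's/^\s*//g' | awk '{print $9}'
    }
    rate=$(reactor_cpu 2159519 0)                # e.g. "0.0" when idle, "99.9" under perf load
    rate=${rate%%.*}                             # keep the integer part, as the helper does
    [[ -n $rate ]] && (( rate <= 30 )) && echo "reactor_0 idle at ${rate}%"
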
00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2159519 -w 256 00:32:20.789 16:33:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2159529 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1' 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2159529 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:21.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:21.049 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:21.050 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:21.050 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:21.050 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:21.050 16:33:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:21.050 16:33:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:21.050 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:21.050 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:21.050 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:21.050 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:21.050 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:21.050 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:21.050 rmmod nvme_tcp 00:32:21.050 rmmod nvme_fabrics 00:32:21.050 rmmod nvme_keyring 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2159519 ']' 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2159519 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2159519 ']' 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2159519 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2159519 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2159519' 00:32:21.309 killing process with pid 2159519 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2159519 00:32:21.309 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2159519 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.568 16:33:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.474 16:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:23.474 00:32:23.474 real 0m22.942s 00:32:23.474 user 0m39.871s 00:32:23.474 sys 0m8.393s 00:32:23.474 16:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:23.474 16:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:23.474 ************************************ 00:32:23.474 END TEST nvmf_interrupt 00:32:23.474 ************************************ 00:32:23.474 00:32:23.474 real 27m29.427s 00:32:23.474 user 56m37.323s 00:32:23.474 sys 9m16.816s 00:32:23.474 16:33:54 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:23.474 16:33:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.474 ************************************ 00:32:23.474 END TEST nvmf_tcp 00:32:23.474 ************************************ 00:32:23.734 16:33:54 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:23.734 16:33:54 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:23.734 16:33:54 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:23.734 16:33:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:23.734 16:33:54 -- common/autotest_common.sh@10 -- # set +x 00:32:23.734 ************************************ 00:32:23.734 START TEST spdkcli_nvmf_tcp 00:32:23.734 ************************************ 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:23.734 * Looking for test storage... 00:32:23.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:23.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.734 --rc genhtml_branch_coverage=1 00:32:23.734 --rc genhtml_function_coverage=1 00:32:23.734 --rc genhtml_legend=1 00:32:23.734 --rc geninfo_all_blocks=1 00:32:23.734 --rc geninfo_unexecuted_blocks=1 00:32:23.734 00:32:23.734 ' 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:23.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.734 --rc genhtml_branch_coverage=1 00:32:23.734 --rc genhtml_function_coverage=1 00:32:23.734 --rc genhtml_legend=1 00:32:23.734 --rc geninfo_all_blocks=1 00:32:23.734 --rc geninfo_unexecuted_blocks=1 00:32:23.734 00:32:23.734 ' 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:23.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.734 --rc genhtml_branch_coverage=1 00:32:23.734 --rc genhtml_function_coverage=1 00:32:23.734 --rc genhtml_legend=1 00:32:23.734 --rc geninfo_all_blocks=1 00:32:23.734 --rc geninfo_unexecuted_blocks=1 00:32:23.734 00:32:23.734 ' 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:23.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.734 --rc genhtml_branch_coverage=1 00:32:23.734 --rc genhtml_function_coverage=1 00:32:23.734 --rc genhtml_legend=1 00:32:23.734 --rc geninfo_all_blocks=1 00:32:23.734 --rc geninfo_unexecuted_blocks=1 00:32:23.734 00:32:23.734 ' 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:23.734 16:33:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:23.735 
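The lcov gate above is a plain dotted-version comparison: both version strings are split on '.', '-' and ':' and compared field by field. A compact equivalent of what scripts/common.sh is doing here (written fresh for illustration, not the harness's own helper):

    version_lt() {                               # true if dotted version $1 < $2
        local IFS=.-: i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov older than 2: keep the legacy --rc lcov_* options"
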
16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:23.735 16:33:54 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:23.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:23.735 16:33:54 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2162471 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2162471 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2162471 ']' 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.994 16:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.994 [2024-11-20 16:33:55.017070] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
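As in the interrupt test, run_nvmf_tgt starts the target and then blocks until the RPC socket accepts requests (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message). A minimal way to reproduce that wait, assuming the default socket path:

    ./build/bin/nvmf_tgt -m 0x3 -p 0 &           # two reactors, main core pinned to core 0
    tgt_pid=$!
    # rpc_get_methods is a cheap query; it only succeeds once the app is listening.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done
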
00:32:23.994 [2024-11-20 16:33:55.017116] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162471 ] 00:32:23.994 [2024-11-20 16:33:55.089651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:23.994 [2024-11-20 16:33:55.130757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.994 [2024-11-20 16:33:55.130759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.253 16:33:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.253 16:33:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:24.253 16:33:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:24.253 16:33:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:24.253 16:33:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.253 16:33:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:24.253 16:33:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:24.253 16:33:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:24.253 16:33:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.253 16:33:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.253 16:33:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:24.253 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:24.253 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:24.253 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:24.253 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:24.253 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:24.253 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:24.253 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:24.253 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:24.253 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:24.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:24.253 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:24.253 ' 00:32:26.780 [2024-11-20 16:33:57.969444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.160 [2024-11-20 16:33:59.305888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:30.689 [2024-11-20 16:34:01.793604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:33.215 [2024-11-20 16:34:03.948283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:34.587 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:34.587 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:34.587 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:34.587 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:34.587 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:34.587 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:34.587 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:34.587 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:34.587 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:34.587 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:34.587 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:34.587 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:34.587 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:34.587 16:34:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:34.588 16:34:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.588 16:34:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.588 16:34:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:34.588 16:34:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.588 16:34:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.588 16:34:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:34.588 16:34:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:35.153 16:34:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:35.153 16:34:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:35.153 16:34:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:35.153 16:34:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:35.153 16:34:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:35.153 
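The spdkcli paths exercised above are thin wrappers around SPDK's JSON-RPC interface, so the same NVMe/TCP target layout can be reproduced by hand against a running nvmf_tgt. The lines below are a minimal sketch, not taken from this run: they assume the default /var/tmp/spdk.sock RPC socket and the rpc.py shipped in scripts/, and option spellings (the transport tuning flags in particular) should be confirmed with rpc.py <method> -h for the SPDK revision under test.

  # backing malloc bdev (32 MiB, 512-byte blocks), mirroring '/bdevs/malloc create 32 512 Malloc3'
  scripts/rpc.py bdev_malloc_create -b Malloc3 32 512
  # bring up the TCP transport (io_unit_size / max_io_qpairs_per_ctrlr tuning is optional)
  scripts/rpc.py nvmf_create_transport -t tcp
  # subsystem with serial number, namespace cap and allow_any_host, as in the '/nvmf/subsystem create' step above
  scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
  # attach the bdev as namespace 1 and listen on 127.0.0.1:4260
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260
  # register an allowed host NQN; the spdkcli 'allow_any_host False' toggle maps onto the nvmf_subsystem_allow_any_host RPC
  scripts/rpc.py nvmf_subsystem_add_host nqn.2014-08.org.spdk:cnode1 nqn.2014-08.org.spdk:cnode2
  # inspect the resulting tree the same way check_match does before diffing against the .match file
  scripts/spdkcli.py ll /nvmf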
16:34:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:35.153 16:34:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:35.153 16:34:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:35.153 16:34:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:35.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:35.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:35.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:35.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:35.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:35.153 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:35.153 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:35.153 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:35.153 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:35.153 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:35.153 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:35.153 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:35.153 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:35.153 ' 00:32:41.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:41.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:41.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:41.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:41.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:41.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:41.713 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:41.713 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:41.713 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:41.713 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:41.713 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:41.713 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:41.713 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:41.713 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:41.713 
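The clear_nvmf_config pass above tears the configuration down in the reverse of the setup order: namespaces, hosts and listen addresses come off a subsystem before the subsystem itself is deleted, and subsystems are deleted before their backing malloc bdevs. A minimal sketch of the equivalent direct RPC calls, under the same assumptions as the earlier sketch (default RPC socket, flag details to be confirmed against rpc.py -h); the spdkcli delete_all steps have no single RPC counterpart and correspond to one delete call per remaining child.

  # detach namespace 1, the allowed host and one listener from cnode1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2014-08.org.spdk:cnode1 nqn.2014-08.org.spdk:cnode2
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4262
  # drop the subsystem, then the backing bdev
  scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode1
  scripts/rpc.py bdev_malloc_delete Malloc3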
16:34:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2162471 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2162471 ']' 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2162471 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2162471 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2162471' 00:32:41.713 killing process with pid 2162471 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2162471 00:32:41.713 16:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2162471 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2162471 ']' 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2162471 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2162471 ']' 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2162471 00:32:41.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2162471) - No such process 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2162471 is not found' 00:32:41.713 Process with pid 2162471 is not found 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:41.713 00:32:41.713 real 0m17.330s 00:32:41.713 user 0m38.163s 00:32:41.713 sys 0m0.807s 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.713 16:34:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:41.713 ************************************ 00:32:41.713 END TEST spdkcli_nvmf_tcp 00:32:41.713 ************************************ 00:32:41.713 16:34:12 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:41.713 16:34:12 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:41.713 16:34:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.713 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:32:41.713 ************************************ 00:32:41.713 START TEST nvmf_identify_passthru 00:32:41.713 ************************************ 00:32:41.713 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:41.713 * Looking for test 
storage... 00:32:41.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.713 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:41.713 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:41.713 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:41.713 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.713 16:34:12 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:41.713 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.713 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:41.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.713 --rc genhtml_branch_coverage=1 00:32:41.713 --rc genhtml_function_coverage=1 00:32:41.713 --rc genhtml_legend=1 00:32:41.713 --rc geninfo_all_blocks=1 00:32:41.713 --rc geninfo_unexecuted_blocks=1 00:32:41.713 00:32:41.713 ' 00:32:41.714 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:41.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.714 --rc genhtml_branch_coverage=1 00:32:41.714 --rc genhtml_function_coverage=1 00:32:41.714 --rc genhtml_legend=1 00:32:41.714 --rc geninfo_all_blocks=1 00:32:41.714 --rc geninfo_unexecuted_blocks=1 00:32:41.714 00:32:41.714 ' 00:32:41.714 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:41.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.714 --rc genhtml_branch_coverage=1 00:32:41.714 --rc genhtml_function_coverage=1 00:32:41.714 --rc genhtml_legend=1 00:32:41.714 --rc geninfo_all_blocks=1 00:32:41.714 --rc geninfo_unexecuted_blocks=1 00:32:41.714 00:32:41.714 ' 00:32:41.714 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:41.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.714 --rc genhtml_branch_coverage=1 00:32:41.714 --rc genhtml_function_coverage=1 00:32:41.714 --rc genhtml_legend=1 00:32:41.714 --rc geninfo_all_blocks=1 00:32:41.714 --rc geninfo_unexecuted_blocks=1 00:32:41.714 00:32:41.714 ' 00:32:41.714 16:34:12 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.714 16:34:12 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.714 16:34:12 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.714 16:34:12 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.714 16:34:12 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.714 16:34:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.714 16:34:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.714 16:34:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.714 16:34:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:41.714 16:34:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:41.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.714 16:34:12 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.714 16:34:12 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.714 16:34:12 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.714 16:34:12 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.714 16:34:12 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.714 16:34:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.714 16:34:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.714 16:34:12 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.714 16:34:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:41.714 16:34:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.714 16:34:12 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.714 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:41.714 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:41.714 16:34:12 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.714 16:34:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:46.989 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.990 16:34:18 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:46.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:46.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:46.990 Found net devices under 0000:86:00.0: cvl_0_0 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:46.990 Found net devices under 0000:86:00.1: cvl_0_1 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.990 16:34:18 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.990 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:47.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:32:47.249 00:32:47.249 --- 10.0.0.2 ping statistics --- 00:32:47.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.249 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:47.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:32:47.249 00:32:47.249 --- 10.0.0.1 ping statistics --- 00:32:47.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.249 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:47.249 16:34:18 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:47.249 16:34:18 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.249 16:34:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:47.249 16:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:47.249 16:34:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:47.249 16:34:18 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:47.249 16:34:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:47.249 16:34:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:47.249 16:34:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:52.659 16:34:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLN951000C61P6AGN 00:32:52.659 16:34:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:52.659 16:34:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:52.659 16:34:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:56.846 16:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:56.846 16:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:56.846 16:34:27 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:56.846 16:34:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.846 16:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:56.846 16:34:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:56.846 16:34:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.846 16:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2169748 00:32:56.846 16:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:56.846 16:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:56.846 16:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2169748 00:32:56.846 16:34:27 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2169748 ']' 00:32:56.846 16:34:27 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.846 16:34:27 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.846 16:34:27 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.846 16:34:27 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.846 16:34:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.846 [2024-11-20 16:34:27.874635] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:32:56.846 [2024-11-20 16:34:27.874679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:56.846 [2024-11-20 16:34:27.954501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:56.846 [2024-11-20 16:34:27.996695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:56.846 [2024-11-20 16:34:27.996731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:56.846 [2024-11-20 16:34:27.996738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:56.846 [2024-11-20 16:34:27.996743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:56.846 [2024-11-20 16:34:27.996748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:56.846 [2024-11-20 16:34:27.998325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.846 [2024-11-20 16:34:27.998424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:56.846 [2024-11-20 16:34:27.998543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.846 [2024-11-20 16:34:27.998545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:56.846 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.846 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:56.846 16:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:56.846 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.846 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.846 INFO: Log level set to 20 00:32:56.846 INFO: Requests: 00:32:56.846 { 00:32:56.846 "jsonrpc": "2.0", 00:32:56.846 "method": "nvmf_set_config", 00:32:56.846 "id": 1, 00:32:56.846 "params": { 00:32:56.846 "admin_cmd_passthru": { 00:32:56.846 "identify_ctrlr": true 00:32:56.846 } 00:32:56.846 } 00:32:56.846 } 00:32:56.846 00:32:56.846 INFO: response: 00:32:56.846 { 00:32:56.847 "jsonrpc": "2.0", 00:32:56.847 "id": 1, 00:32:56.847 "result": true 00:32:56.847 } 00:32:56.847 00:32:56.847 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.847 16:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:56.847 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.847 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.847 INFO: Setting log level to 20 00:32:56.847 INFO: Setting log level to 20 00:32:56.847 INFO: Log level set to 20 00:32:56.847 INFO: Log level set to 20 00:32:56.847 INFO: Requests: 00:32:56.847 { 00:32:56.847 "jsonrpc": "2.0", 00:32:56.847 "method": "framework_start_init", 00:32:56.847 "id": 1 00:32:56.847 } 00:32:56.847 00:32:56.847 INFO: Requests: 00:32:56.847 { 00:32:56.847 "jsonrpc": "2.0", 00:32:56.847 "method": "framework_start_init", 00:32:56.847 "id": 1 00:32:56.847 } 00:32:56.847 00:32:57.105 [2024-11-20 16:34:28.106481] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:57.105 INFO: response: 00:32:57.105 { 00:32:57.105 "jsonrpc": "2.0", 00:32:57.105 "id": 1, 00:32:57.105 "result": true 00:32:57.105 } 00:32:57.105 00:32:57.105 INFO: response: 00:32:57.105 { 00:32:57.105 "jsonrpc": "2.0", 00:32:57.105 "id": 1, 00:32:57.105 "result": true 00:32:57.105 } 00:32:57.105 00:32:57.105 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.105 16:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:57.105 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.105 16:34:28 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:32:57.105 INFO: Setting log level to 40 00:32:57.105 INFO: Setting log level to 40 00:32:57.105 INFO: Setting log level to 40 00:32:57.105 [2024-11-20 16:34:28.119798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.105 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.105 16:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:57.105 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.105 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.105 16:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:57.105 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.105 16:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.384 Nvme0n1 00:33:00.384 16:34:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.384 16:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:00.384 16:34:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.384 16:34:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.385 [2024-11-20 16:34:31.029696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.385 [ 00:33:00.385 { 00:33:00.385 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:00.385 "subtype": "Discovery", 00:33:00.385 "listen_addresses": [], 00:33:00.385 "allow_any_host": true, 00:33:00.385 "hosts": [] 00:33:00.385 }, 00:33:00.385 { 00:33:00.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.385 "subtype": "NVMe", 00:33:00.385 "listen_addresses": [ 00:33:00.385 { 00:33:00.385 "trtype": "TCP", 00:33:00.385 "adrfam": "IPv4", 00:33:00.385 "traddr": "10.0.0.2", 00:33:00.385 "trsvcid": "4420" 00:33:00.385 } 00:33:00.385 ], 00:33:00.385 "allow_any_host": true, 00:33:00.385 "hosts": [], 00:33:00.385 "serial_number": 
"SPDK00000000000001", 00:33:00.385 "model_number": "SPDK bdev Controller", 00:33:00.385 "max_namespaces": 1, 00:33:00.385 "min_cntlid": 1, 00:33:00.385 "max_cntlid": 65519, 00:33:00.385 "namespaces": [ 00:33:00.385 { 00:33:00.385 "nsid": 1, 00:33:00.385 "bdev_name": "Nvme0n1", 00:33:00.385 "name": "Nvme0n1", 00:33:00.385 "nguid": "197AAC809A9A41DA88D24C087389C96D", 00:33:00.385 "uuid": "197aac80-9a9a-41da-88d2-4c087389c96d" 00:33:00.385 } 00:33:00.385 ] 00:33:00.385 } 00:33:00.385 ] 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:00.385 16:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:00.385 16:34:31 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:00.385 16:34:31 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:00.385 16:34:31 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:00.385 16:34:31 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:00.385 16:34:31 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:00.385 16:34:31 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:00.385 rmmod nvme_tcp 00:33:00.385 rmmod nvme_fabrics 00:33:00.385 rmmod nvme_keyring 00:33:00.385 16:34:31 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:00.385 16:34:31 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:00.385 16:34:31 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:00.385 16:34:31 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 2169748 ']' 00:33:00.385 16:34:31 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2169748 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2169748 ']' 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2169748 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2169748 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2169748' 00:33:00.385 killing process with pid 2169748 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2169748 00:33:00.385 16:34:31 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2169748 00:33:02.911 16:34:33 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:02.911 16:34:33 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:02.911 16:34:33 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:02.911 16:34:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:02.911 16:34:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:02.911 16:34:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:02.911 16:34:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:02.911 16:34:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:02.911 16:34:33 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:02.911 16:34:33 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.911 16:34:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:02.911 16:34:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.816 16:34:35 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:04.816 00:33:04.816 real 0m23.450s 00:33:04.816 user 0m29.761s 00:33:04.816 sys 0m6.327s 00:33:04.816 16:34:35 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:04.816 16:34:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:04.816 ************************************ 00:33:04.816 END TEST nvmf_identify_passthru 00:33:04.816 ************************************ 00:33:04.816 16:34:35 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:04.816 16:34:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:04.816 16:34:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:04.816 16:34:35 -- common/autotest_common.sh@10 -- # set +x 00:33:04.816 ************************************ 00:33:04.816 START TEST nvmf_dif 00:33:04.816 ************************************ 00:33:04.816 16:34:35 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:04.816 * Looking for test 
storage... 00:33:04.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:04.816 16:34:35 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:04.816 16:34:35 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:04.816 16:34:35 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:04.816 16:34:35 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:04.816 16:34:35 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:04.816 16:34:35 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:04.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.816 --rc genhtml_branch_coverage=1 00:33:04.816 --rc genhtml_function_coverage=1 00:33:04.816 --rc genhtml_legend=1 00:33:04.816 --rc geninfo_all_blocks=1 00:33:04.816 --rc geninfo_unexecuted_blocks=1 00:33:04.816 00:33:04.816 ' 00:33:04.816 16:34:35 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:04.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.816 --rc genhtml_branch_coverage=1 00:33:04.816 --rc genhtml_function_coverage=1 00:33:04.816 --rc genhtml_legend=1 00:33:04.816 --rc geninfo_all_blocks=1 00:33:04.816 --rc geninfo_unexecuted_blocks=1 00:33:04.816 00:33:04.816 ' 00:33:04.816 16:34:35 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:04.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.816 --rc genhtml_branch_coverage=1 00:33:04.816 --rc genhtml_function_coverage=1 00:33:04.816 --rc genhtml_legend=1 00:33:04.816 --rc geninfo_all_blocks=1 00:33:04.816 --rc geninfo_unexecuted_blocks=1 00:33:04.816 00:33:04.816 ' 00:33:04.816 16:34:35 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:04.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.816 --rc genhtml_branch_coverage=1 00:33:04.816 --rc genhtml_function_coverage=1 00:33:04.816 --rc genhtml_legend=1 00:33:04.816 --rc geninfo_all_blocks=1 00:33:04.816 --rc geninfo_unexecuted_blocks=1 00:33:04.816 00:33:04.816 ' 00:33:04.816 16:34:35 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.816 16:34:35 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.816 16:34:35 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.816 16:34:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.816 16:34:35 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.817 16:34:35 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.817 16:34:35 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:04.817 16:34:35 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:04.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:04.817 16:34:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:04.817 16:34:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:04.817 16:34:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:04.817 16:34:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:04.817 16:34:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.817 16:34:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:04.817 16:34:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:04.817 16:34:35 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:33:04.817 16:34:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:11.385 16:34:41 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:11.386 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.386 
16:34:41 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:11.386 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:11.386 Found net devices under 0000:86:00.0: cvl_0_0 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:11.386 Found net devices under 0000:86:00.1: cvl_0_1 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:11.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:11.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:33:11.386 00:33:11.386 --- 10.0.0.2 ping statistics --- 00:33:11.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.386 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:11.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:11.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:33:11.386 00:33:11.386 --- 10.0.0.1 ping statistics --- 00:33:11.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.386 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:11.386 16:34:41 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:13.289 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:13.289 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:13.289 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:13.549 16:34:44 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.549 16:34:44 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:13.549 16:34:44 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:13.549 16:34:44 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.549 16:34:44 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:13.549 16:34:44 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:13.549 16:34:44 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:13.549 16:34:44 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:13.549 16:34:44 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:13.549 16:34:44 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:13.549 16:34:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:13.549 16:34:44 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2175437 00:33:13.549 16:34:44 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2175437 00:33:13.549 16:34:44 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:13.549 16:34:44 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2175437 ']' 00:33:13.549 16:34:44 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.549 16:34:44 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:13.549 16:34:44 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:33:13.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.549 16:34:44 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:13.549 16:34:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:13.549 [2024-11-20 16:34:44.706239] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:33:13.549 [2024-11-20 16:34:44.706283] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.809 [2024-11-20 16:34:44.781375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.809 [2024-11-20 16:34:44.820182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.809 [2024-11-20 16:34:44.820219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:13.809 [2024-11-20 16:34:44.820226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:13.809 [2024-11-20 16:34:44.820232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:13.809 [2024-11-20 16:34:44.820237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:13.809 [2024-11-20 16:34:44.820821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.809 16:34:44 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.809 16:34:44 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:13.809 16:34:44 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:13.809 16:34:44 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.809 16:34:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:13.809 16:34:44 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.809 16:34:44 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:13.809 16:34:44 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:13.809 16:34:44 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.809 16:34:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:13.809 [2024-11-20 16:34:44.964599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.809 16:34:44 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.809 16:34:44 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:13.809 16:34:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:13.809 16:34:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:13.809 16:34:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:13.809 ************************************ 00:33:13.809 START TEST fio_dif_1_default 00:33:13.809 ************************************ 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:13.809 bdev_null0 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:13.809 [2024-11-20 16:34:45.032922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:13.809 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:14.068 { 00:33:14.068 "params": { 00:33:14.068 "name": "Nvme$subsystem", 00:33:14.068 "trtype": "$TEST_TRANSPORT", 00:33:14.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:14.068 "adrfam": "ipv4", 00:33:14.068 
"trsvcid": "$NVMF_PORT", 00:33:14.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:14.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:14.068 "hdgst": ${hdgst:-false}, 00:33:14.068 "ddgst": ${ddgst:-false} 00:33:14.068 }, 00:33:14.068 "method": "bdev_nvme_attach_controller" 00:33:14.068 } 00:33:14.068 EOF 00:33:14.068 )") 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:14.068 "params": { 00:33:14.068 "name": "Nvme0", 00:33:14.068 "trtype": "tcp", 00:33:14.068 "traddr": "10.0.0.2", 00:33:14.068 "adrfam": "ipv4", 00:33:14.068 "trsvcid": "4420", 00:33:14.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:14.068 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:14.068 "hdgst": false, 00:33:14.068 "ddgst": false 00:33:14.068 }, 00:33:14.068 "method": "bdev_nvme_attach_controller" 00:33:14.068 }' 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:14.068 16:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:14.328 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:14.328 fio-3.35 00:33:14.328 Starting 1 thread 00:33:26.560 00:33:26.561 filename0: (groupid=0, jobs=1): err= 0: pid=2175695: Wed Nov 20 16:34:56 2024 00:33:26.561 read: IOPS=200, BW=804KiB/s (823kB/s)(8048KiB/10013msec) 00:33:26.561 slat (nsec): min=5497, max=40467, avg=6210.67, stdev=1448.32 00:33:26.561 clat (usec): min=373, max=42552, avg=19887.96, stdev=20348.12 00:33:26.561 lat (usec): min=379, max=42558, avg=19894.17, stdev=20348.01 00:33:26.561 clat percentiles (usec): 00:33:26.561 | 1.00th=[ 383], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 408], 00:33:26.561 | 30.00th=[ 416], 40.00th=[ 510], 50.00th=[ 603], 60.00th=[40633], 00:33:26.561 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:26.561 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:26.561 | 99.99th=[42730] 00:33:26.561 bw ( KiB/s): min= 736, max= 896, per=99.91%, avg=803.20, stdev=47.46, samples=20 00:33:26.561 iops : min= 184, max= 224, avg=200.80, stdev=11.87, samples=20 00:33:26.561 lat (usec) : 500=39.81%, 750=12.28% 00:33:26.561 lat (msec) : 4=0.20%, 50=47.71% 00:33:26.561 cpu : usr=92.35%, sys=7.41%, ctx=14, majf=0, minf=0 00:33:26.561 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:26.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:26.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:26.561 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:26.561 latency : target=0, window=0, percentile=100.00%, depth=4 
00:33:26.561 00:33:26.561 Run status group 0 (all jobs): 00:33:26.561 READ: bw=804KiB/s (823kB/s), 804KiB/s-804KiB/s (823kB/s-823kB/s), io=8048KiB (8241kB), run=10013-10013msec 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.561 00:33:26.561 real 0m11.202s 00:33:26.561 user 0m16.364s 00:33:26.561 sys 0m1.041s 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 ************************************ 00:33:26.561 END TEST fio_dif_1_default 00:33:26.561 ************************************ 00:33:26.561 16:34:56 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:26.561 16:34:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:26.561 16:34:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 ************************************ 00:33:26.561 START TEST fio_dif_1_multi_subsystems 00:33:26.561 ************************************ 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 bdev_null0 00:33:26.561 16:34:56 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 [2024-11-20 16:34:56.304218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 bdev_null1 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:26.561 { 00:33:26.561 "params": { 00:33:26.561 "name": "Nvme$subsystem", 00:33:26.561 "trtype": "$TEST_TRANSPORT", 00:33:26.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:26.561 "adrfam": "ipv4", 00:33:26.561 "trsvcid": "$NVMF_PORT", 00:33:26.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:26.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:26.561 "hdgst": ${hdgst:-false}, 00:33:26.561 "ddgst": ${ddgst:-false} 00:33:26.561 }, 00:33:26.561 "method": "bdev_nvme_attach_controller" 00:33:26.561 } 00:33:26.561 EOF 00:33:26.561 )") 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:26.561 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:26.562 { 00:33:26.562 "params": { 00:33:26.562 "name": "Nvme$subsystem", 00:33:26.562 "trtype": "$TEST_TRANSPORT", 00:33:26.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:26.562 "adrfam": "ipv4", 00:33:26.562 "trsvcid": "$NVMF_PORT", 00:33:26.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:26.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:26.562 "hdgst": ${hdgst:-false}, 00:33:26.562 "ddgst": ${ddgst:-false} 00:33:26.562 }, 00:33:26.562 "method": "bdev_nvme_attach_controller" 00:33:26.562 } 00:33:26.562 EOF 00:33:26.562 )") 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:26.562 "params": { 00:33:26.562 "name": "Nvme0", 00:33:26.562 "trtype": "tcp", 00:33:26.562 "traddr": "10.0.0.2", 00:33:26.562 "adrfam": "ipv4", 00:33:26.562 "trsvcid": "4420", 00:33:26.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:26.562 "hdgst": false, 00:33:26.562 "ddgst": false 00:33:26.562 }, 00:33:26.562 "method": "bdev_nvme_attach_controller" 00:33:26.562 },{ 00:33:26.562 "params": { 00:33:26.562 "name": "Nvme1", 00:33:26.562 "trtype": "tcp", 00:33:26.562 "traddr": "10.0.0.2", 00:33:26.562 "adrfam": "ipv4", 00:33:26.562 "trsvcid": "4420", 00:33:26.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:26.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:26.562 "hdgst": false, 00:33:26.562 "ddgst": false 00:33:26.562 }, 00:33:26.562 "method": "bdev_nvme_attach_controller" 00:33:26.562 }' 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:26.562 16:34:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:26.562 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:26.562 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:26.562 fio-3.35 00:33:26.562 Starting 2 threads 00:33:36.538 00:33:36.538 filename0: (groupid=0, jobs=1): err= 0: pid=2177561: Wed Nov 20 16:35:07 2024 00:33:36.538 read: IOPS=186, BW=747KiB/s (764kB/s)(7472KiB/10009msec) 00:33:36.538 slat (nsec): min=5874, max=51736, avg=8905.07, stdev=5663.48 00:33:36.538 clat (usec): min=370, max=42640, avg=21406.16, stdev=20523.77 00:33:36.538 lat (usec): min=376, max=42648, avg=21415.07, stdev=20522.31 00:33:36.538 clat percentiles (usec): 00:33:36.538 | 1.00th=[ 404], 5.00th=[ 449], 10.00th=[ 469], 20.00th=[ 570], 00:33:36.538 | 30.00th=[ 627], 40.00th=[ 644], 50.00th=[41157], 60.00th=[41157], 00:33:36.538 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:33:36.538 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:36.538 | 99.99th=[42730] 00:33:36.538 bw ( KiB/s): min= 512, max= 768, per=56.35%, avg=745.60, stdev=59.73, samples=20 00:33:36.538 iops : min= 128, max= 192, avg=186.40, stdev=14.93, samples=20 00:33:36.538 lat (usec) : 500=15.20%, 750=31.00%, 1000=2.84% 00:33:36.538 lat (msec) : 2=0.21%, 50=50.75% 00:33:36.538 cpu : usr=98.17%, sys=1.53%, ctx=28, majf=0, minf=150 00:33:36.538 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:36.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.538 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.538 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:36.538 filename1: (groupid=0, jobs=1): err= 0: pid=2177562: Wed Nov 20 16:35:07 2024 00:33:36.538 read: IOPS=143, BW=576KiB/s (590kB/s)(5760KiB/10005msec) 00:33:36.538 slat (nsec): min=5995, max=63488, avg=10041.40, stdev=7281.71 00:33:36.538 clat (usec): min=383, max=42568, avg=27761.10, stdev=19521.15 00:33:36.538 lat (usec): min=390, max=42575, avg=27771.14, stdev=19518.92 00:33:36.538 clat percentiles (usec): 00:33:36.538 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 420], 20.00th=[ 498], 00:33:36.538 | 30.00th=[ 644], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:33:36.539 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:36.539 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:36.539 | 99.99th=[42730] 00:33:36.539 bw ( KiB/s): min= 352, max= 896, per=43.27%, avg=572.63, stdev=201.51, samples=19 00:33:36.539 iops : min= 88, max= 224, avg=143.16, stdev=50.38, samples=19 00:33:36.539 lat (usec) : 500=20.49%, 750=13.33%, 1000=0.07% 00:33:36.539 lat (msec) : 50=66.11% 00:33:36.539 cpu : usr=97.31%, sys=2.43%, ctx=20, majf=0, minf=158 00:33:36.539 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:33:36.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.539 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.539 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:36.539 00:33:36.539 Run status group 0 (all jobs): 00:33:36.539 READ: bw=1322KiB/s (1354kB/s), 576KiB/s-747KiB/s (590kB/s-764kB/s), io=12.9MiB (13.5MB), run=10005-10009msec 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.539 00:33:36.539 real 0m11.339s 00:33:36.539 user 0m26.375s 00:33:36.539 sys 0m0.750s 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:36.539 16:35:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.539 ************************************ 00:33:36.539 END TEST fio_dif_1_multi_subsystems 00:33:36.539 
************************************ 00:33:36.539 16:35:07 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:36.539 16:35:07 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:36.539 16:35:07 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:36.539 16:35:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:36.539 ************************************ 00:33:36.539 START TEST fio_dif_rand_params 00:33:36.539 ************************************ 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.539 bdev_null0 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.539 [2024-11-20 16:35:07.723122] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.539 { 00:33:36.539 "params": { 00:33:36.539 "name": "Nvme$subsystem", 00:33:36.539 "trtype": "$TEST_TRANSPORT", 00:33:36.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.539 "adrfam": "ipv4", 00:33:36.539 "trsvcid": "$NVMF_PORT", 00:33:36.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.539 "hdgst": ${hdgst:-false}, 00:33:36.539 "ddgst": ${ddgst:-false} 00:33:36.539 }, 00:33:36.539 "method": "bdev_nvme_attach_controller" 00:33:36.539 } 00:33:36.539 EOF 00:33:36.539 )") 00:33:36.539 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
awk '{print $3}' 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:36.540 16:35:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:36.540 "params": { 00:33:36.540 "name": "Nvme0", 00:33:36.540 "trtype": "tcp", 00:33:36.540 "traddr": "10.0.0.2", 00:33:36.540 "adrfam": "ipv4", 00:33:36.540 "trsvcid": "4420", 00:33:36.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.540 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:36.540 "hdgst": false, 00:33:36.540 "ddgst": false 00:33:36.540 }, 00:33:36.540 "method": "bdev_nvme_attach_controller" 00:33:36.540 }' 00:33:36.818 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.818 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.818 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.818 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.818 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:36.818 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:36.818 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.818 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.818 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:36.818 16:35:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:37.083 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:37.083 ... 
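The run above preloads the SPDK bdev fio plugin and hands fio an SPDK JSON configuration on /dev/fd/62 plus a generated job file on /dev/fd/61. A minimal standalone sketch of the same invocation, assuming the two descriptors are replaced by ordinary files named bdev.json and dif.fio (names chosen here only for illustration):

# Preload the SPDK external ioengine and point fio at the SPDK bdev JSON config.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio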
00:33:37.083 fio-3.35 00:33:37.083 Starting 3 threads 00:33:43.646 00:33:43.646 filename0: (groupid=0, jobs=1): err= 0: pid=2179526: Wed Nov 20 16:35:13 2024 00:33:43.646 read: IOPS=318, BW=39.8MiB/s (41.7MB/s)(201MiB/5045msec) 00:33:43.646 slat (nsec): min=6374, max=53472, avg=17517.83, stdev=8235.06 00:33:43.646 clat (usec): min=3341, max=52833, avg=9377.19, stdev=6522.28 00:33:43.646 lat (usec): min=3349, max=52844, avg=9394.71, stdev=6522.66 00:33:43.646 clat percentiles (usec): 00:33:43.646 | 1.00th=[ 3851], 5.00th=[ 6325], 10.00th=[ 7177], 20.00th=[ 7701], 00:33:43.646 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717], 00:33:43.646 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10421], 00:33:43.646 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[52691], 00:33:43.646 | 99.99th=[52691] 00:33:43.646 bw ( KiB/s): min=32768, max=46848, per=34.58%, avg=41062.40, stdev=4198.19, samples=10 00:33:43.646 iops : min= 256, max= 366, avg=320.80, stdev=32.80, samples=10 00:33:43.646 lat (msec) : 4=1.37%, 10=91.10%, 20=4.98%, 50=2.05%, 100=0.50% 00:33:43.646 cpu : usr=96.39%, sys=3.25%, ctx=25, majf=0, minf=71 00:33:43.646 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:43.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.646 issued rwts: total=1606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.646 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:43.646 filename0: (groupid=0, jobs=1): err= 0: pid=2179527: Wed Nov 20 16:35:13 2024 00:33:43.646 read: IOPS=328, BW=41.1MiB/s (43.1MB/s)(207MiB/5044msec) 00:33:43.646 slat (nsec): min=5891, max=67289, avg=17761.50, stdev=8145.36 00:33:43.646 clat (usec): min=3799, max=49527, avg=9081.17, stdev=3515.37 00:33:43.646 lat (usec): min=3808, max=49535, avg=9098.94, stdev=3515.31 00:33:43.646 clat percentiles (usec): 00:33:43.646 | 1.00th=[ 4146], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 7570], 00:33:43.646 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:33:43.646 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[11338], 00:33:43.646 | 99.00th=[12518], 99.50th=[45876], 99.90th=[48497], 99.95th=[49546], 00:33:43.646 | 99.99th=[49546] 00:33:43.646 bw ( KiB/s): min=40704, max=44544, per=35.70%, avg=42393.60, stdev=1403.21, samples=10 00:33:43.646 iops : min= 318, max= 348, avg=331.20, stdev=10.96, samples=10 00:33:43.646 lat (msec) : 4=0.78%, 10=75.93%, 20=22.62%, 50=0.66% 00:33:43.646 cpu : usr=96.23%, sys=3.45%, ctx=8, majf=0, minf=194 00:33:43.646 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:43.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.646 issued rwts: total=1658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.646 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:43.646 filename0: (groupid=0, jobs=1): err= 0: pid=2179528: Wed Nov 20 16:35:13 2024 00:33:43.646 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(177MiB/5045msec) 00:33:43.646 slat (nsec): min=6001, max=40226, avg=19382.25, stdev=6352.87 00:33:43.646 clat (usec): min=3878, max=53493, avg=10664.45, stdev=5397.77 00:33:43.646 lat (usec): min=3884, max=53503, avg=10683.83, stdev=5397.52 00:33:43.646 clat percentiles (usec): 00:33:43.646 | 1.00th=[ 5473], 5.00th=[ 6390], 10.00th=[ 7111], 
20.00th=[ 8717], 00:33:43.646 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10683], 00:33:43.646 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12125], 95.00th=[12649], 00:33:43.646 | 99.00th=[48497], 99.50th=[50594], 99.90th=[52691], 99.95th=[53740], 00:33:43.646 | 99.99th=[53740] 00:33:43.646 bw ( KiB/s): min=31744, max=38656, per=30.46%, avg=36172.80, stdev=1865.86, samples=10 00:33:43.646 iops : min= 248, max= 302, avg=282.60, stdev=14.58, samples=10 00:33:43.646 lat (msec) : 4=0.14%, 10=42.37%, 20=55.79%, 50=1.13%, 100=0.56% 00:33:43.646 cpu : usr=96.55%, sys=3.07%, ctx=39, majf=0, minf=49 00:33:43.646 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:43.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.646 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.646 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:43.646 00:33:43.646 Run status group 0 (all jobs): 00:33:43.646 READ: bw=116MiB/s (122MB/s), 35.1MiB/s-41.1MiB/s (36.8MB/s-43.1MB/s), io=585MiB (613MB), run=5044-5045msec 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.646 bdev_null0 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.646 16:35:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.646 [2024-11-20 16:35:13.998334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.646 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.647 bdev_null1 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.647 16:35:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.647 bdev_null2 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
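The create_subsystems calls above reduce to the same four RPCs per subsystem; rpc_cmd in the harness is a thin wrapper around SPDK's scripts/rpc.py. Sketch for subsystem 0 of this run (a null bdev with 16-byte metadata and DIF type 2, exported over NVMe/TCP on 10.0.0.2:4420):

# Create a null bdev carrying per-block metadata with DIF type 2, then export it.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420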
00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:43.647 { 00:33:43.647 "params": { 00:33:43.647 "name": "Nvme$subsystem", 00:33:43.647 "trtype": "$TEST_TRANSPORT", 00:33:43.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.647 "adrfam": "ipv4", 00:33:43.647 "trsvcid": "$NVMF_PORT", 00:33:43.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.647 "hdgst": ${hdgst:-false}, 00:33:43.647 "ddgst": ${ddgst:-false} 00:33:43.647 }, 00:33:43.647 "method": "bdev_nvme_attach_controller" 00:33:43.647 } 00:33:43.647 EOF 00:33:43.647 )") 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:43.647 { 00:33:43.647 "params": { 00:33:43.647 "name": "Nvme$subsystem", 00:33:43.647 "trtype": "$TEST_TRANSPORT", 00:33:43.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.647 "adrfam": "ipv4", 00:33:43.647 "trsvcid": "$NVMF_PORT", 00:33:43.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.647 "hdgst": ${hdgst:-false}, 00:33:43.647 "ddgst": ${ddgst:-false} 00:33:43.647 }, 00:33:43.647 "method": "bdev_nvme_attach_controller" 00:33:43.647 } 00:33:43.647 EOF 00:33:43.647 )") 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
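gen_fio_conf writes the matching job file to /dev/fd/61; its exact contents are not captured in this log. A hypothetical job file consistent with the parameters set for this run (bs=4k, numjobs=8, iodepth=16, three files) might be generated as below, with the Nvme0n1/Nvme1n1/Nvme2n1 filenames assumed from SPDK's bdev_nvme naming:

# Hypothetical sketch only -- the real file comes from gen_fio_conf in target/dif.sh.
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
numjobs=8
iodepth=16

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF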
00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:43.647 { 00:33:43.647 "params": { 00:33:43.647 "name": "Nvme$subsystem", 00:33:43.647 "trtype": "$TEST_TRANSPORT", 00:33:43.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.647 "adrfam": "ipv4", 00:33:43.647 "trsvcid": "$NVMF_PORT", 00:33:43.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.647 "hdgst": ${hdgst:-false}, 00:33:43.647 "ddgst": ${ddgst:-false} 00:33:43.647 }, 00:33:43.647 "method": "bdev_nvme_attach_controller" 00:33:43.647 } 00:33:43.647 EOF 00:33:43.647 )") 00:33:43.647 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:43.648 "params": { 00:33:43.648 "name": "Nvme0", 00:33:43.648 "trtype": "tcp", 00:33:43.648 "traddr": "10.0.0.2", 00:33:43.648 "adrfam": "ipv4", 00:33:43.648 "trsvcid": "4420", 00:33:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:43.648 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:43.648 "hdgst": false, 00:33:43.648 "ddgst": false 00:33:43.648 }, 00:33:43.648 "method": "bdev_nvme_attach_controller" 00:33:43.648 },{ 00:33:43.648 "params": { 00:33:43.648 "name": "Nvme1", 00:33:43.648 "trtype": "tcp", 00:33:43.648 "traddr": "10.0.0.2", 00:33:43.648 "adrfam": "ipv4", 00:33:43.648 "trsvcid": "4420", 00:33:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:43.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:43.648 "hdgst": false, 00:33:43.648 "ddgst": false 00:33:43.648 }, 00:33:43.648 "method": "bdev_nvme_attach_controller" 00:33:43.648 },{ 00:33:43.648 "params": { 00:33:43.648 "name": "Nvme2", 00:33:43.648 "trtype": "tcp", 00:33:43.648 "traddr": "10.0.0.2", 00:33:43.648 "adrfam": "ipv4", 00:33:43.648 "trsvcid": "4420", 00:33:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:43.648 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:43.648 "hdgst": false, 00:33:43.648 "ddgst": false 00:33:43.648 }, 00:33:43.648 "method": "bdev_nvme_attach_controller" 00:33:43.648 }' 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:43.648 16:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.648 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:43.648 ... 00:33:43.648 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:43.648 ... 00:33:43.648 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:43.648 ... 00:33:43.648 fio-3.35 00:33:43.648 Starting 24 threads 00:33:55.839 00:33:55.839 filename0: (groupid=0, jobs=1): err= 0: pid=2180791: Wed Nov 20 16:35:25 2024 00:33:55.839 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10005msec) 00:33:55.839 slat (usec): min=4, max=120, avg=42.10, stdev=24.50 00:33:55.839 clat (usec): min=10976, max=51988, avg=30027.41, stdev=1647.97 00:33:55.839 lat (usec): min=10986, max=52009, avg=30069.51, stdev=1647.42 00:33:55.839 clat percentiles (usec): 00:33:55.839 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:33:55.839 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:33:55.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.839 | 99.00th=[31327], 99.50th=[31589], 99.90th=[52167], 99.95th=[52167], 00:33:55.839 | 99.99th=[52167] 00:33:55.839 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2099.35, stdev=76.21, samples=20 00:33:55.839 iops : min= 480, max= 544, avg=524.80, stdev=19.14, samples=20 00:33:55.839 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:33:55.839 cpu : usr=98.68%, sys=0.91%, ctx=12, majf=0, minf=9 00:33:55.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:55.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.839 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.839 filename0: (groupid=0, jobs=1): err= 0: pid=2180792: Wed Nov 20 16:35:25 2024 00:33:55.839 read: IOPS=528, BW=2112KiB/s (2163kB/s)(20.6MiB/10010msec) 00:33:55.839 slat (usec): min=5, max=129, avg=33.06, stdev=23.79 00:33:55.839 clat (usec): min=13920, max=37849, avg=30026.93, stdev=1302.13 00:33:55.839 lat (usec): min=13928, max=37868, avg=30059.99, stdev=1300.29 00:33:55.839 clat percentiles (usec): 00:33:55.839 | 1.00th=[26346], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:33:55.839 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:55.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30802], 95.00th=[31065], 00:33:55.839 | 99.00th=[31589], 99.50th=[33424], 99.90th=[37487], 99.95th=[37487], 00:33:55.839 | 99.99th=[38011] 00:33:55.839 bw ( KiB/s): min= 2048, max= 2224, per=4.17%, avg=2111.16, stdev=67.77, samples=19 00:33:55.839 iops : min= 512, max= 556, avg=527.79, stdev=16.94, samples=19 00:33:55.839 lat (msec) : 20=0.53%, 50=99.47% 00:33:55.839 cpu : usr=98.77%, sys=0.84%, ctx=12, majf=0, minf=9 00:33:55.839 IO depths : 1=5.6%, 2=11.8%, 4=24.7%, 
8=51.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:33:55.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.839 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.839 issued rwts: total=5286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.839 filename0: (groupid=0, jobs=1): err= 0: pid=2180793: Wed Nov 20 16:35:25 2024 00:33:55.839 read: IOPS=529, BW=2119KiB/s (2170kB/s)(20.8MiB/10027msec) 00:33:55.839 slat (usec): min=6, max=294, avg=24.70, stdev=20.79 00:33:55.839 clat (usec): min=4815, max=32813, avg=29987.22, stdev=1714.77 00:33:55.839 lat (usec): min=4999, max=32831, avg=30011.92, stdev=1705.52 00:33:55.839 clat percentiles (usec): 00:33:55.839 | 1.00th=[24511], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:55.839 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:55.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30802], 95.00th=[30802], 00:33:55.839 | 99.00th=[31589], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:33:55.839 | 99.99th=[32900] 00:33:55.839 bw ( KiB/s): min= 2048, max= 2308, per=4.18%, avg=2118.60, stdev=77.92, samples=20 00:33:55.839 iops : min= 512, max= 577, avg=529.65, stdev=19.48, samples=20 00:33:55.839 lat (msec) : 10=0.30%, 20=0.56%, 50=99.13% 00:33:55.839 cpu : usr=98.54%, sys=1.06%, ctx=10, majf=0, minf=9 00:33:55.839 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:55.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.839 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.839 filename0: (groupid=0, jobs=1): err= 0: pid=2180794: Wed Nov 20 16:35:25 2024 00:33:55.839 read: IOPS=530, BW=2124KiB/s (2175kB/s)(20.8MiB/10028msec) 00:33:55.839 slat (usec): min=7, max=123, avg=36.95, stdev=22.71 00:33:55.839 clat (usec): min=7453, max=43237, avg=29827.99, stdev=2118.38 00:33:55.839 lat (usec): min=7464, max=43247, avg=29864.94, stdev=2119.88 00:33:55.839 clat percentiles (usec): 00:33:55.839 | 1.00th=[13960], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:33:55.839 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:55.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.839 | 99.00th=[32113], 99.50th=[32113], 99.90th=[34341], 99.95th=[43254], 00:33:55.839 | 99.99th=[43254] 00:33:55.839 bw ( KiB/s): min= 2048, max= 2308, per=4.19%, avg=2123.40, stdev=84.34, samples=20 00:33:55.839 iops : min= 512, max= 577, avg=530.85, stdev=21.08, samples=20 00:33:55.839 lat (msec) : 10=0.11%, 20=1.35%, 50=98.53% 00:33:55.839 cpu : usr=98.62%, sys=0.99%, ctx=14, majf=0, minf=9 00:33:55.839 IO depths : 1=5.1%, 2=11.2%, 4=24.6%, 8=51.7%, 16=7.5%, 32=0.0%, >=64=0.0% 00:33:55.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.839 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.839 issued rwts: total=5324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.839 filename0: (groupid=0, jobs=1): err= 0: pid=2180795: Wed Nov 20 16:35:25 2024 00:33:55.839 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10003msec) 00:33:55.839 slat (usec): min=4, max=109, avg=30.99, stdev=21.01 00:33:55.839 clat (usec): 
min=17156, max=43388, avg=30119.21, stdev=813.86 00:33:55.839 lat (usec): min=17165, max=43404, avg=30150.20, stdev=810.01 00:33:55.839 clat percentiles (usec): 00:33:55.839 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:33:55.839 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:55.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.839 | 99.00th=[31589], 99.50th=[32113], 99.90th=[40633], 99.95th=[40633], 00:33:55.839 | 99.99th=[43254] 00:33:55.839 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2101.89, stdev=64.93, samples=19 00:33:55.839 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:33:55.839 lat (msec) : 20=0.04%, 50=99.96% 00:33:55.839 cpu : usr=98.43%, sys=1.19%, ctx=17, majf=0, minf=9 00:33:55.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:55.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.839 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.839 filename0: (groupid=0, jobs=1): err= 0: pid=2180796: Wed Nov 20 16:35:25 2024 00:33:55.839 read: IOPS=527, BW=2109KiB/s (2160kB/s)(20.6MiB/10014msec) 00:33:55.839 slat (usec): min=5, max=117, avg=30.26, stdev=22.07 00:33:55.839 clat (usec): min=15918, max=44709, avg=30095.87, stdev=1545.01 00:33:55.839 lat (usec): min=15927, max=44725, avg=30126.13, stdev=1545.00 00:33:55.839 clat percentiles (usec): 00:33:55.839 | 1.00th=[25035], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:33:55.839 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:55.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30802], 95.00th=[31065], 00:33:55.839 | 99.00th=[32375], 99.50th=[39060], 99.90th=[44827], 99.95th=[44827], 00:33:55.839 | 99.99th=[44827] 00:33:55.839 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2110.25, stdev=62.84, samples=20 00:33:55.839 iops : min= 512, max= 544, avg=527.55, stdev=15.70, samples=20 00:33:55.839 lat (msec) : 20=0.34%, 50=99.66% 00:33:55.839 cpu : usr=98.43%, sys=1.17%, ctx=16, majf=0, minf=9 00:33:55.840 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:55.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.840 filename0: (groupid=0, jobs=1): err= 0: pid=2180797: Wed Nov 20 16:35:25 2024 00:33:55.840 read: IOPS=537, BW=2152KiB/s (2203kB/s)(21.1MiB/10024msec) 00:33:55.840 slat (nsec): min=6530, max=76280, avg=20015.86, stdev=13546.78 00:33:55.840 clat (usec): min=1374, max=50504, avg=29579.59, stdev=4045.11 00:33:55.840 lat (usec): min=1387, max=50513, avg=29599.60, stdev=4045.46 00:33:55.840 clat percentiles (usec): 00:33:55.840 | 1.00th=[ 3032], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:55.840 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:55.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.840 | 99.00th=[31589], 99.50th=[31851], 99.90th=[50594], 99.95th=[50594], 00:33:55.840 | 99.99th=[50594] 00:33:55.840 bw ( KiB/s): min= 2048, max= 2944, per=4.25%, avg=2150.40, stdev=197.43, 
samples=20 00:33:55.840 iops : min= 512, max= 736, avg=537.60, stdev=49.36, samples=20 00:33:55.840 lat (msec) : 2=0.89%, 4=0.30%, 10=0.70%, 20=1.22%, 50=96.74% 00:33:55.840 lat (msec) : 100=0.15% 00:33:55.840 cpu : usr=98.87%, sys=0.82%, ctx=15, majf=0, minf=9 00:33:55.840 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:55.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.840 filename0: (groupid=0, jobs=1): err= 0: pid=2180798: Wed Nov 20 16:35:25 2024 00:33:55.840 read: IOPS=526, BW=2108KiB/s (2158kB/s)(20.6MiB/10021msec) 00:33:55.840 slat (usec): min=7, max=125, avg=38.18, stdev=23.37 00:33:55.840 clat (usec): min=18137, max=32579, avg=30015.45, stdev=700.15 00:33:55.840 lat (usec): min=18153, max=32607, avg=30053.64, stdev=700.53 00:33:55.840 clat percentiles (usec): 00:33:55.840 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:33:55.840 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:33:55.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.840 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32375], 99.95th=[32375], 00:33:55.840 | 99.99th=[32637] 00:33:55.840 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2105.60, stdev=65.33, samples=20 00:33:55.840 iops : min= 512, max= 544, avg=526.40, stdev=16.33, samples=20 00:33:55.840 lat (msec) : 20=0.13%, 50=99.87% 00:33:55.840 cpu : usr=98.75%, sys=0.86%, ctx=9, majf=0, minf=9 00:33:55.840 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:55.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.840 filename1: (groupid=0, jobs=1): err= 0: pid=2180799: Wed Nov 20 16:35:25 2024 00:33:55.840 read: IOPS=530, BW=2121KiB/s (2172kB/s)(20.8MiB/10028msec) 00:33:55.840 slat (usec): min=6, max=121, avg=39.10, stdev=23.38 00:33:55.840 clat (usec): min=6742, max=33092, avg=29823.40, stdev=1962.66 00:33:55.840 lat (usec): min=6752, max=33101, avg=29862.50, stdev=1964.36 00:33:55.840 clat percentiles (usec): 00:33:55.840 | 1.00th=[17695], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:33:55.840 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:33:55.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.840 | 99.00th=[31589], 99.50th=[32113], 99.90th=[32900], 99.95th=[33162], 00:33:55.840 | 99.99th=[33162] 00:33:55.840 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2120.80, stdev=80.00, samples=20 00:33:55.840 iops : min= 512, max= 576, avg=530.20, stdev=20.00, samples=20 00:33:55.840 lat (msec) : 10=0.15%, 20=1.17%, 50=98.68% 00:33:55.840 cpu : usr=98.65%, sys=0.96%, ctx=10, majf=0, minf=9 00:33:55.840 IO depths : 1=5.7%, 2=11.8%, 4=24.6%, 8=51.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:33:55.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 issued rwts: total=5318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.840 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:33:55.840 filename1: (groupid=0, jobs=1): err= 0: pid=2180800: Wed Nov 20 16:35:25 2024 00:33:55.840 read: IOPS=530, BW=2122KiB/s (2173kB/s)(20.7MiB/10001msec) 00:33:55.840 slat (usec): min=4, max=123, avg=36.00, stdev=22.38 00:33:55.840 clat (usec): min=14953, max=51707, avg=29864.45, stdev=2310.60 00:33:55.840 lat (usec): min=14969, max=51722, avg=29900.46, stdev=2311.65 00:33:55.840 clat percentiles (usec): 00:33:55.840 | 1.00th=[19530], 5.00th=[28443], 10.00th=[29492], 20.00th=[29754], 00:33:55.840 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:33:55.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:33:55.840 | 99.00th=[35390], 99.50th=[39584], 99.90th=[51643], 99.95th=[51643], 00:33:55.840 | 99.99th=[51643] 00:33:55.840 bw ( KiB/s): min= 1920, max= 2456, per=4.19%, avg=2119.16, stdev=109.78, samples=19 00:33:55.840 iops : min= 480, max= 614, avg=529.79, stdev=27.45, samples=19 00:33:55.840 lat (msec) : 20=1.56%, 50=98.13%, 100=0.30% 00:33:55.840 cpu : usr=98.33%, sys=1.25%, ctx=41, majf=0, minf=9 00:33:55.840 IO depths : 1=4.7%, 2=10.4%, 4=23.4%, 8=53.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:33:55.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 issued rwts: total=5305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.840 filename1: (groupid=0, jobs=1): err= 0: pid=2180801: Wed Nov 20 16:35:25 2024 00:33:55.840 read: IOPS=529, BW=2119KiB/s (2170kB/s)(20.7MiB/10006msec) 00:33:55.840 slat (usec): min=4, max=127, avg=36.83, stdev=25.73 00:33:55.840 clat (usec): min=10366, max=46055, avg=29856.56, stdev=2300.79 00:33:55.840 lat (usec): min=10377, max=46076, avg=29893.40, stdev=2301.35 00:33:55.840 clat percentiles (usec): 00:33:55.840 | 1.00th=[19792], 5.00th=[27919], 10.00th=[29492], 20.00th=[29492], 00:33:55.840 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:33:55.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[31327], 00:33:55.840 | 99.00th=[38536], 99.50th=[40109], 99.90th=[45876], 99.95th=[45876], 00:33:55.840 | 99.99th=[45876] 00:33:55.840 bw ( KiB/s): min= 1920, max= 2288, per=4.17%, avg=2113.60, stdev=84.01, samples=20 00:33:55.840 iops : min= 480, max= 572, avg=528.40, stdev=21.00, samples=20 00:33:55.840 lat (msec) : 20=1.06%, 50=98.94% 00:33:55.840 cpu : usr=98.60%, sys=1.02%, ctx=11, majf=0, minf=9 00:33:55.840 IO depths : 1=5.2%, 2=10.5%, 4=21.7%, 8=54.8%, 16=7.8%, 32=0.0%, >=64=0.0% 00:33:55.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 complete : 0=0.0%, 4=93.3%, 8=1.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 issued rwts: total=5300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.840 filename1: (groupid=0, jobs=1): err= 0: pid=2180802: Wed Nov 20 16:35:25 2024 00:33:55.840 read: IOPS=525, BW=2101KiB/s (2151kB/s)(20.6MiB/10046msec) 00:33:55.840 slat (usec): min=4, max=115, avg=33.36, stdev=22.89 00:33:55.840 clat (usec): min=10469, max=46202, avg=30061.96, stdev=1821.68 00:33:55.840 lat (usec): min=10497, max=46215, avg=30095.32, stdev=1819.86 00:33:55.840 clat percentiles (usec): 00:33:55.840 | 1.00th=[23462], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:33:55.840 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 
60.00th=[30278], 00:33:55.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30802], 95.00th=[31065], 00:33:55.840 | 99.00th=[36963], 99.50th=[38011], 99.90th=[45876], 99.95th=[45876], 00:33:55.840 | 99.99th=[46400] 00:33:55.840 bw ( KiB/s): min= 1923, max= 2176, per=4.16%, avg=2104.15, stdev=75.83, samples=20 00:33:55.840 iops : min= 480, max= 544, avg=526.00, stdev=19.05, samples=20 00:33:55.840 lat (msec) : 20=0.42%, 50=99.58% 00:33:55.840 cpu : usr=98.82%, sys=0.80%, ctx=14, majf=0, minf=9 00:33:55.840 IO depths : 1=4.7%, 2=10.9%, 4=24.7%, 8=51.9%, 16=7.8%, 32=0.0%, >=64=0.0% 00:33:55.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 issued rwts: total=5276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.840 filename1: (groupid=0, jobs=1): err= 0: pid=2180803: Wed Nov 20 16:35:25 2024 00:33:55.840 read: IOPS=526, BW=2104KiB/s (2155kB/s)(20.6MiB/10006msec) 00:33:55.840 slat (usec): min=7, max=131, avg=47.29, stdev=24.89 00:33:55.840 clat (usec): min=27793, max=33521, avg=29986.22, stdev=473.69 00:33:55.840 lat (usec): min=27810, max=33534, avg=30033.51, stdev=471.73 00:33:55.840 clat percentiles (usec): 00:33:55.840 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:33:55.840 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:33:55.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.840 | 99.00th=[31327], 99.50th=[31851], 99.90th=[33424], 99.95th=[33424], 00:33:55.840 | 99.99th=[33424] 00:33:55.840 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2101.89, stdev=64.93, samples=19 00:33:55.840 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:33:55.840 lat (msec) : 50=100.00% 00:33:55.840 cpu : usr=98.58%, sys=1.03%, ctx=13, majf=0, minf=9 00:33:55.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:55.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.840 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.840 filename1: (groupid=0, jobs=1): err= 0: pid=2180804: Wed Nov 20 16:35:25 2024 00:33:55.840 read: IOPS=529, BW=2119KiB/s (2170kB/s)(20.8MiB/10027msec) 00:33:55.840 slat (usec): min=7, max=129, avg=35.12, stdev=21.25 00:33:55.840 clat (usec): min=11102, max=39381, avg=29922.38, stdev=1987.48 00:33:55.840 lat (usec): min=11124, max=39417, avg=29957.49, stdev=1988.53 00:33:55.840 clat percentiles (usec): 00:33:55.841 | 1.00th=[17957], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:55.841 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:55.841 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.841 | 99.00th=[32113], 99.50th=[35914], 99.90th=[38011], 99.95th=[38011], 00:33:55.841 | 99.99th=[39584] 00:33:55.841 bw ( KiB/s): min= 2048, max= 2308, per=4.18%, avg=2118.60, stdev=75.46, samples=20 00:33:55.841 iops : min= 512, max= 577, avg=529.65, stdev=18.87, samples=20 00:33:55.841 lat (msec) : 20=1.20%, 50=98.80% 00:33:55.841 cpu : usr=98.67%, sys=0.94%, ctx=12, majf=0, minf=9 00:33:55.841 IO depths : 1=4.6%, 2=10.8%, 4=24.9%, 8=51.8%, 16=7.9%, 32=0.0%, >=64=0.0% 00:33:55.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.841 filename1: (groupid=0, jobs=1): err= 0: pid=2180805: Wed Nov 20 16:35:25 2024 00:33:55.841 read: IOPS=526, BW=2108KiB/s (2159kB/s)(20.6MiB/10019msec) 00:33:55.841 slat (usec): min=7, max=122, avg=37.61, stdev=25.27 00:33:55.841 clat (usec): min=16187, max=38300, avg=30020.48, stdev=818.35 00:33:55.841 lat (usec): min=16198, max=38327, avg=30058.09, stdev=817.40 00:33:55.841 clat percentiles (usec): 00:33:55.841 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:33:55.841 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:55.841 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.841 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31851], 00:33:55.841 | 99.99th=[38536] 00:33:55.841 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2105.60, stdev=65.33, samples=20 00:33:55.841 iops : min= 512, max= 544, avg=526.40, stdev=16.33, samples=20 00:33:55.841 lat (msec) : 20=0.30%, 50=99.70% 00:33:55.841 cpu : usr=98.75%, sys=0.86%, ctx=13, majf=0, minf=9 00:33:55.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:55.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.841 filename1: (groupid=0, jobs=1): err= 0: pid=2180806: Wed Nov 20 16:35:25 2024 00:33:55.841 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10001msec) 00:33:55.841 slat (usec): min=5, max=129, avg=46.18, stdev=25.16 00:33:55.841 clat (usec): min=22311, max=58194, avg=30036.91, stdev=1414.35 00:33:55.841 lat (usec): min=22318, max=58210, avg=30083.09, stdev=1413.09 00:33:55.841 clat percentiles (usec): 00:33:55.841 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:33:55.841 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:33:55.841 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.841 | 99.00th=[31589], 99.50th=[35914], 99.90th=[51643], 99.95th=[51643], 00:33:55.841 | 99.99th=[57934] 00:33:55.841 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2095.16, stdev=73.46, samples=19 00:33:55.841 iops : min= 480, max= 544, avg=523.79, stdev=18.37, samples=19 00:33:55.841 lat (msec) : 50=99.70%, 100=0.30% 00:33:55.841 cpu : usr=98.67%, sys=0.95%, ctx=14, majf=0, minf=9 00:33:55.841 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:55.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.841 filename2: (groupid=0, jobs=1): err= 0: pid=2180807: Wed Nov 20 16:35:25 2024 00:33:55.841 read: IOPS=529, BW=2119KiB/s (2170kB/s)(20.8MiB/10028msec) 00:33:55.841 slat (usec): min=6, max=130, avg=38.90, stdev=22.83 00:33:55.841 clat (usec): min=8798, max=32614, avg=29851.24, stdev=1798.04 00:33:55.841 lat (usec): min=8808, max=32638, 
avg=29890.14, stdev=1799.48 00:33:55.841 clat percentiles (usec): 00:33:55.841 | 1.00th=[17957], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:33:55.841 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:33:55.841 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.841 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:33:55.841 | 99.99th=[32637] 00:33:55.841 bw ( KiB/s): min= 2048, max= 2308, per=4.18%, avg=2118.60, stdev=77.92, samples=20 00:33:55.841 iops : min= 512, max= 577, avg=529.65, stdev=19.48, samples=20 00:33:55.841 lat (msec) : 10=0.08%, 20=1.13%, 50=98.80% 00:33:55.841 cpu : usr=98.56%, sys=1.06%, ctx=10, majf=0, minf=9 00:33:55.841 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:55.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.841 filename2: (groupid=0, jobs=1): err= 0: pid=2180808: Wed Nov 20 16:35:25 2024 00:33:55.841 read: IOPS=527, BW=2109KiB/s (2160kB/s)(20.6MiB/10006msec) 00:33:55.841 slat (usec): min=4, max=123, avg=36.52, stdev=24.96 00:33:55.841 clat (usec): min=10357, max=45435, avg=29976.41, stdev=1655.26 00:33:55.841 lat (usec): min=10365, max=45451, avg=30012.93, stdev=1654.55 00:33:55.841 clat percentiles (usec): 00:33:55.841 | 1.00th=[26346], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:33:55.841 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:33:55.841 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:33:55.841 | 99.00th=[31589], 99.50th=[33424], 99.90th=[45351], 99.95th=[45351], 00:33:55.841 | 99.99th=[45351] 00:33:55.841 bw ( KiB/s): min= 1923, max= 2176, per=4.16%, avg=2104.15, stdev=75.83, samples=20 00:33:55.841 iops : min= 480, max= 544, avg=526.00, stdev=19.05, samples=20 00:33:55.841 lat (msec) : 20=0.42%, 50=99.58% 00:33:55.841 cpu : usr=98.52%, sys=1.10%, ctx=13, majf=0, minf=9 00:33:55.841 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:55.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 issued rwts: total=5276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.841 filename2: (groupid=0, jobs=1): err= 0: pid=2180809: Wed Nov 20 16:35:25 2024 00:33:55.841 read: IOPS=526, BW=2108KiB/s (2158kB/s)(20.6MiB/10002msec) 00:33:55.841 slat (usec): min=4, max=132, avg=46.39, stdev=25.40 00:33:55.841 clat (usec): min=18823, max=53149, avg=29955.27, stdev=1707.20 00:33:55.841 lat (usec): min=18831, max=53163, avg=30001.67, stdev=1706.86 00:33:55.841 clat percentiles (usec): 00:33:55.841 | 1.00th=[22938], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:33:55.841 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:33:55.841 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.841 | 99.00th=[31589], 99.50th=[35914], 99.90th=[53216], 99.95th=[53216], 00:33:55.841 | 99.99th=[53216] 00:33:55.841 bw ( KiB/s): min= 1923, max= 2224, per=4.16%, avg=2104.58, stdev=80.57, samples=19 00:33:55.841 iops : min= 480, max= 556, avg=526.11, stdev=20.24, samples=19 00:33:55.841 
lat (msec) : 20=0.47%, 50=99.22%, 100=0.30% 00:33:55.841 cpu : usr=98.69%, sys=0.92%, ctx=53, majf=0, minf=9 00:33:55.841 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:55.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 issued rwts: total=5270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.841 filename2: (groupid=0, jobs=1): err= 0: pid=2180811: Wed Nov 20 16:35:25 2024 00:33:55.841 read: IOPS=526, BW=2104KiB/s (2155kB/s)(20.6MiB/10006msec) 00:33:55.841 slat (usec): min=7, max=123, avg=42.27, stdev=22.81 00:33:55.841 clat (usec): min=19995, max=45570, avg=30048.05, stdev=626.73 00:33:55.841 lat (usec): min=20003, max=45584, avg=30090.31, stdev=624.37 00:33:55.841 clat percentiles (usec): 00:33:55.841 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:33:55.841 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:33:55.841 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.841 | 99.00th=[31327], 99.50th=[31851], 99.90th=[33424], 99.95th=[38011], 00:33:55.841 | 99.99th=[45351] 00:33:55.841 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2101.89, stdev=64.93, samples=19 00:33:55.841 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:33:55.841 lat (msec) : 20=0.02%, 50=99.98% 00:33:55.841 cpu : usr=98.62%, sys=0.99%, ctx=12, majf=0, minf=9 00:33:55.841 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:55.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.841 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.841 filename2: (groupid=0, jobs=1): err= 0: pid=2180812: Wed Nov 20 16:35:25 2024 00:33:55.841 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.7MiB/10005msec) 00:33:55.841 slat (usec): min=5, max=131, avg=36.23, stdev=25.63 00:33:55.841 clat (usec): min=10372, max=45295, avg=29807.88, stdev=2495.23 00:33:55.841 lat (usec): min=10396, max=45312, avg=29844.11, stdev=2496.51 00:33:55.841 clat percentiles (usec): 00:33:55.841 | 1.00th=[18744], 5.00th=[28181], 10.00th=[29492], 20.00th=[29492], 00:33:55.841 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:33:55.841 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.841 | 99.00th=[38011], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:33:55.841 | 99.99th=[45351] 00:33:55.841 bw ( KiB/s): min= 1923, max= 2272, per=4.18%, avg=2117.75, stdev=79.40, samples=20 00:33:55.841 iops : min= 480, max= 568, avg=529.40, stdev=19.95, samples=20 00:33:55.841 lat (msec) : 20=1.69%, 50=98.31% 00:33:55.842 cpu : usr=98.35%, sys=1.26%, ctx=8, majf=0, minf=9 00:33:55.842 IO depths : 1=4.5%, 2=9.5%, 4=20.5%, 8=56.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:33:55.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.842 complete : 0=0.0%, 4=93.2%, 8=1.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.842 issued rwts: total=5310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.842 filename2: (groupid=0, jobs=1): err= 0: pid=2180813: Wed Nov 20 16:35:25 2024 00:33:55.842 read: 
IOPS=534, BW=2140KiB/s (2191kB/s)(21.0MiB/10029msec) 00:33:55.842 slat (usec): min=7, max=126, avg=34.97, stdev=23.01 00:33:55.842 clat (usec): min=7384, max=42580, avg=29598.04, stdev=2867.62 00:33:55.842 lat (usec): min=7394, max=42591, avg=29633.01, stdev=2870.30 00:33:55.842 clat percentiles (usec): 00:33:55.842 | 1.00th=[12780], 5.00th=[27919], 10.00th=[29492], 20.00th=[29754], 00:33:55.842 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:55.842 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:55.842 | 99.00th=[32113], 99.50th=[32375], 99.90th=[39584], 99.95th=[42730], 00:33:55.842 | 99.99th=[42730] 00:33:55.842 bw ( KiB/s): min= 2048, max= 2424, per=4.23%, avg=2142.20, stdev=105.82, samples=20 00:33:55.842 iops : min= 512, max= 606, avg=535.55, stdev=26.45, samples=20 00:33:55.842 lat (msec) : 10=0.58%, 20=2.18%, 50=97.24% 00:33:55.842 cpu : usr=98.31%, sys=1.28%, ctx=36, majf=0, minf=9 00:33:55.842 IO depths : 1=5.3%, 2=11.1%, 4=23.7%, 8=52.6%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:55.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.842 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.842 issued rwts: total=5365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.842 filename2: (groupid=0, jobs=1): err= 0: pid=2180814: Wed Nov 20 16:35:25 2024 00:33:55.842 read: IOPS=536, BW=2148KiB/s (2199kB/s)(21.0MiB/10004msec) 00:33:55.842 slat (usec): min=6, max=119, avg=37.42, stdev=20.07 00:33:55.842 clat (usec): min=10486, max=74034, avg=29502.51, stdev=4085.51 00:33:55.842 lat (usec): min=10497, max=74075, avg=29539.93, stdev=4090.00 00:33:55.842 clat percentiles (usec): 00:33:55.842 | 1.00th=[18482], 5.00th=[20317], 10.00th=[26870], 20.00th=[29492], 00:33:55.842 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:33:55.842 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[31327], 00:33:55.842 | 99.00th=[44827], 99.50th=[45876], 99.90th=[60556], 99.95th=[73925], 00:33:55.842 | 99.99th=[73925] 00:33:55.842 bw ( KiB/s): min= 1792, max= 2448, per=4.23%, avg=2140.63, stdev=139.16, samples=19 00:33:55.842 iops : min= 448, max= 612, avg=535.16, stdev=34.79, samples=19 00:33:55.842 lat (msec) : 20=4.32%, 50=95.38%, 100=0.30% 00:33:55.842 cpu : usr=98.78%, sys=0.84%, ctx=29, majf=0, minf=9 00:33:55.842 IO depths : 1=3.3%, 2=7.7%, 4=19.4%, 8=59.7%, 16=9.9%, 32=0.0%, >=64=0.0% 00:33:55.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.842 complete : 0=0.0%, 4=92.8%, 8=2.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.842 issued rwts: total=5372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.842 filename2: (groupid=0, jobs=1): err= 0: pid=2180815: Wed Nov 20 16:35:25 2024 00:33:55.842 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10005msec) 00:33:55.842 slat (usec): min=7, max=134, avg=20.41, stdev= 8.78 00:33:55.842 clat (usec): min=10678, max=61482, avg=30228.89, stdev=2067.81 00:33:55.842 lat (usec): min=10685, max=61517, avg=30249.30, stdev=2068.59 00:33:55.842 clat percentiles (usec): 00:33:55.842 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:55.842 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:55.842 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30802], 95.00th=[30802], 00:33:55.842 | 99.00th=[31589], 99.50th=[32375], 
99.90th=[61080], 99.95th=[61604], 00:33:55.842 | 99.99th=[61604] 00:33:55.842 bw ( KiB/s): min= 1923, max= 2232, per=4.15%, avg=2102.15, stdev=80.10, samples=20 00:33:55.842 iops : min= 480, max= 558, avg=525.50, stdev=20.11, samples=20 00:33:55.842 lat (msec) : 20=0.34%, 50=99.35%, 100=0.30% 00:33:55.842 cpu : usr=98.52%, sys=1.09%, ctx=21, majf=0, minf=9 00:33:55.842 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:55.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.842 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.842 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:55.842 00:33:55.842 Run status group 0 (all jobs): 00:33:55.842 READ: bw=49.4MiB/s (51.8MB/s), 2099KiB/s-2152KiB/s (2149kB/s-2203kB/s), io=497MiB (521MB), run=10001-10046msec 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # 
destroy_subsystem 2 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.842 bdev_null0 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:55.842 16:35:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.842 [2024-11-20 16:35:25.972370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.842 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.843 bdev_null1 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.843 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:55.843 16:35:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:55.843 { 00:33:55.843 "params": { 00:33:55.843 "name": "Nvme$subsystem", 00:33:55.843 "trtype": "$TEST_TRANSPORT", 00:33:55.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:55.843 "adrfam": "ipv4", 00:33:55.843 "trsvcid": "$NVMF_PORT", 00:33:55.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:55.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:55.843 "hdgst": ${hdgst:-false}, 00:33:55.843 "ddgst": ${ddgst:-false} 00:33:55.843 }, 00:33:55.843 "method": "bdev_nvme_attach_controller" 00:33:55.843 } 00:33:55.843 EOF 00:33:55.843 )") 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:55.843 { 00:33:55.843 "params": { 00:33:55.843 "name": "Nvme$subsystem", 00:33:55.843 "trtype": "$TEST_TRANSPORT", 00:33:55.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:55.843 "adrfam": "ipv4", 00:33:55.843 "trsvcid": "$NVMF_PORT", 00:33:55.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:55.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:55.843 "hdgst": ${hdgst:-false}, 00:33:55.843 "ddgst": ${ddgst:-false} 00:33:55.843 }, 00:33:55.843 "method": "bdev_nvme_attach_controller" 00:33:55.843 } 00:33:55.843 EOF 00:33:55.843 )") 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@584 -- # jq . 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:55.843 "params": { 00:33:55.843 "name": "Nvme0", 00:33:55.843 "trtype": "tcp", 00:33:55.843 "traddr": "10.0.0.2", 00:33:55.843 "adrfam": "ipv4", 00:33:55.843 "trsvcid": "4420", 00:33:55.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:55.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:55.843 "hdgst": false, 00:33:55.843 "ddgst": false 00:33:55.843 }, 00:33:55.843 "method": "bdev_nvme_attach_controller" 00:33:55.843 },{ 00:33:55.843 "params": { 00:33:55.843 "name": "Nvme1", 00:33:55.843 "trtype": "tcp", 00:33:55.843 "traddr": "10.0.0.2", 00:33:55.843 "adrfam": "ipv4", 00:33:55.843 "trsvcid": "4420", 00:33:55.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:55.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:55.843 "hdgst": false, 00:33:55.843 "ddgst": false 00:33:55.843 }, 00:33:55.843 "method": "bdev_nvme_attach_controller" 00:33:55.843 }' 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:55.843 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:55.843 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:55.843 ... 00:33:55.843 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:55.843 ... 
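For reference, the wrapped invocation above (the bdev_nvme JSON piped over /dev/fd/62, the generated fio job over /dev/fd/61, and the SPDK fio bdev plugin pulled in via LD_PRELOAD) can be reproduced outside the dif.sh harness roughly as sketched below. The plugin path and controller names are the ones printed in the log; the job file is a simplified stand-in for the one gen_fio_conf emits, and the bdev names Nvme0n1/Nvme1n1 are assumed from the controller names Nvme0/Nvme1.

    # Sketch: run the same 8k/16k/128k randread workload against the two NVMe/TCP
    # bdevs with the SPDK fio plugin directly. Assumes bdev.json holds the
    # bdev_nvme_attach_controller config shown above.
    cat > randread-dif.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=./bdev.json
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio randread-dif.fio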
00:33:55.843 fio-3.35 00:33:55.843 Starting 4 threads 00:34:01.112 00:34:01.112 filename0: (groupid=0, jobs=1): err= 0: pid=2182759: Wed Nov 20 16:35:32 2024 00:34:01.112 read: IOPS=2824, BW=22.1MiB/s (23.1MB/s)(110MiB/5003msec) 00:34:01.112 slat (nsec): min=6050, max=45966, avg=10270.68, stdev=4083.44 00:34:01.112 clat (usec): min=638, max=5782, avg=2799.25, stdev=456.37 00:34:01.112 lat (usec): min=657, max=5807, avg=2809.52, stdev=456.30 00:34:01.112 clat percentiles (usec): 00:34:01.112 | 1.00th=[ 1516], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:34:01.112 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2900], 00:34:01.112 | 70.00th=[ 2999], 80.00th=[ 3130], 90.00th=[ 3326], 95.00th=[ 3523], 00:34:01.112 | 99.00th=[ 4146], 99.50th=[ 4490], 99.90th=[ 4883], 99.95th=[ 5211], 00:34:01.112 | 99.99th=[ 5735] 00:34:01.112 bw ( KiB/s): min=20576, max=24832, per=26.38%, avg=22513.78, stdev=1296.01, samples=9 00:34:01.112 iops : min= 2572, max= 3104, avg=2814.22, stdev=162.00, samples=9 00:34:01.112 lat (usec) : 750=0.02%, 1000=0.36% 00:34:01.112 lat (msec) : 2=2.26%, 4=95.91%, 10=1.45% 00:34:01.112 cpu : usr=96.68%, sys=2.98%, ctx=8, majf=0, minf=9 00:34:01.112 IO depths : 1=0.4%, 2=11.4%, 4=60.0%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.112 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.112 issued rwts: total=14131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.112 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:01.112 filename0: (groupid=0, jobs=1): err= 0: pid=2182760: Wed Nov 20 16:35:32 2024 00:34:01.112 read: IOPS=2537, BW=19.8MiB/s (20.8MB/s)(99.2MiB/5002msec) 00:34:01.112 slat (nsec): min=6061, max=79181, avg=10938.97, stdev=4592.95 00:34:01.112 clat (usec): min=637, max=6192, avg=3117.71, stdev=589.05 00:34:01.112 lat (usec): min=648, max=6205, avg=3128.65, stdev=588.67 00:34:01.112 clat percentiles (usec): 00:34:01.112 | 1.00th=[ 1893], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2737], 00:34:01.112 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3130], 00:34:01.112 | 70.00th=[ 3261], 80.00th=[ 3458], 90.00th=[ 3884], 95.00th=[ 4359], 00:34:01.112 | 99.00th=[ 5014], 99.50th=[ 5276], 99.90th=[ 5800], 99.95th=[ 5997], 00:34:01.112 | 99.99th=[ 6063] 00:34:01.112 bw ( KiB/s): min=19008, max=20944, per=23.46%, avg=20024.11, stdev=745.03, samples=9 00:34:01.112 iops : min= 2376, max= 2618, avg=2503.00, stdev=93.11, samples=9 00:34:01.112 lat (usec) : 750=0.06%, 1000=0.05% 00:34:01.112 lat (msec) : 2=1.21%, 4=90.08%, 10=8.60% 00:34:01.112 cpu : usr=92.48%, sys=5.32%, ctx=280, majf=0, minf=9 00:34:01.112 IO depths : 1=0.3%, 2=7.1%, 4=64.8%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.112 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.112 issued rwts: total=12693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.112 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:01.112 filename1: (groupid=0, jobs=1): err= 0: pid=2182761: Wed Nov 20 16:35:32 2024 00:34:01.112 read: IOPS=2500, BW=19.5MiB/s (20.5MB/s)(97.7MiB/5001msec) 00:34:01.112 slat (nsec): min=6060, max=45938, avg=10749.65, stdev=4316.71 00:34:01.112 clat (usec): min=590, max=6647, avg=3165.80, stdev=596.44 00:34:01.112 lat (usec): min=601, max=6661, avg=3176.55, stdev=595.94 00:34:01.112 clat percentiles (usec): 00:34:01.112 | 
1.00th=[ 1860], 5.00th=[ 2376], 10.00th=[ 2573], 20.00th=[ 2769], 00:34:01.112 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3064], 60.00th=[ 3163], 00:34:01.112 | 70.00th=[ 3294], 80.00th=[ 3523], 90.00th=[ 3982], 95.00th=[ 4359], 00:34:01.112 | 99.00th=[ 5014], 99.50th=[ 5211], 99.90th=[ 5997], 99.95th=[ 6194], 00:34:01.112 | 99.99th=[ 6652] 00:34:01.112 bw ( KiB/s): min=18272, max=20992, per=23.29%, avg=19878.33, stdev=865.53, samples=9 00:34:01.112 iops : min= 2284, max= 2624, avg=2484.78, stdev=108.18, samples=9 00:34:01.112 lat (usec) : 750=0.03%, 1000=0.06% 00:34:01.112 lat (msec) : 2=1.44%, 4=88.72%, 10=9.74% 00:34:01.112 cpu : usr=96.36%, sys=3.30%, ctx=11, majf=0, minf=9 00:34:01.112 IO depths : 1=0.2%, 2=7.5%, 4=64.6%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.112 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.112 issued rwts: total=12504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.112 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:01.112 filename1: (groupid=0, jobs=1): err= 0: pid=2182762: Wed Nov 20 16:35:32 2024 00:34:01.112 read: IOPS=2807, BW=21.9MiB/s (23.0MB/s)(110MiB/5002msec) 00:34:01.112 slat (nsec): min=6062, max=65022, avg=10855.35, stdev=4306.84 00:34:01.112 clat (usec): min=722, max=5808, avg=2813.62, stdev=467.42 00:34:01.112 lat (usec): min=733, max=5821, avg=2824.47, stdev=467.78 00:34:01.112 clat percentiles (usec): 00:34:01.112 | 1.00th=[ 1729], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:34:01.112 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2900], 00:34:01.112 | 70.00th=[ 2999], 80.00th=[ 3130], 90.00th=[ 3359], 95.00th=[ 3556], 00:34:01.112 | 99.00th=[ 4293], 99.50th=[ 4686], 99.90th=[ 5211], 99.95th=[ 5211], 00:34:01.112 | 99.99th=[ 5800] 00:34:01.112 bw ( KiB/s): min=20624, max=24160, per=26.36%, avg=22494.22, stdev=1214.96, samples=9 00:34:01.112 iops : min= 2578, max= 3020, avg=2811.78, stdev=151.87, samples=9 00:34:01.112 lat (usec) : 750=0.02%, 1000=0.03% 00:34:01.112 lat (msec) : 2=2.47%, 4=95.73%, 10=1.75% 00:34:01.112 cpu : usr=96.76%, sys=2.92%, ctx=8, majf=0, minf=9 00:34:01.112 IO depths : 1=0.5%, 2=14.4%, 4=57.4%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.112 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.113 issued rwts: total=14041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.113 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:01.113 00:34:01.113 Run status group 0 (all jobs): 00:34:01.113 READ: bw=83.3MiB/s (87.4MB/s), 19.5MiB/s-22.1MiB/s (20.5MB/s-23.1MB/s), io=417MiB (437MB), run=5001-5003msec 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.371 00:34:01.371 real 0m24.876s 00:34:01.371 user 4m53.588s 00:34:01.371 sys 0m4.830s 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.371 16:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:01.371 ************************************ 00:34:01.371 END TEST fio_dif_rand_params 00:34:01.371 ************************************ 00:34:01.371 16:35:32 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:01.371 16:35:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:01.371 16:35:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.371 16:35:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.629 ************************************ 00:34:01.629 START TEST fio_dif_digest 00:34:01.629 ************************************ 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:01.629 bdev_null0 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:01.629 [2024-11-20 16:35:32.670495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:01.629 16:35:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:01.629 { 00:34:01.629 "params": { 00:34:01.629 "name": 
"Nvme$subsystem", 00:34:01.629 "trtype": "$TEST_TRANSPORT", 00:34:01.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:01.629 "adrfam": "ipv4", 00:34:01.629 "trsvcid": "$NVMF_PORT", 00:34:01.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:01.629 "hdgst": ${hdgst:-false}, 00:34:01.629 "ddgst": ${ddgst:-false} 00:34:01.630 }, 00:34:01.630 "method": "bdev_nvme_attach_controller" 00:34:01.630 } 00:34:01.630 EOF 00:34:01.630 )") 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:01.630 "params": { 00:34:01.630 "name": "Nvme0", 00:34:01.630 "trtype": "tcp", 00:34:01.630 "traddr": "10.0.0.2", 00:34:01.630 "adrfam": "ipv4", 00:34:01.630 "trsvcid": "4420", 00:34:01.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:01.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:01.630 "hdgst": true, 00:34:01.630 "ddgst": true 00:34:01.630 }, 00:34:01.630 "method": "bdev_nvme_attach_controller" 00:34:01.630 }' 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:01.630 16:35:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.887 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:01.887 ... 
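The digest run differs from the earlier one only in the bdev_nvme parameters ("hdgst": true, "ddgst": true) and the 128k, iodepth=3 job shape. Saved to a file instead of being piped over /dev/fd/62, the config and invocation would look roughly like the sketch below: the outer subsystems/config wrapper is the standard SPDK bdev JSON layout and is assumed, only the inner params block appears verbatim in the log, and the filename and time_based settings are likewise assumptions.

    # Sketch: digest-enabled NVMe/TCP bdev config plus the 128k randread job
    # (bs, iodepth, numjobs and runtime match the values target/dif.sh set above).
    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true,
                "ddgst": true
              }
            }
          ]
        }
      ]
    }
    EOF

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=./bdev.json \
        --thread --filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
        --runtime=10 --time_based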
00:34:01.887 fio-3.35 00:34:01.887 Starting 3 threads 00:34:14.108 00:34:14.108 filename0: (groupid=0, jobs=1): err= 0: pid=2183822: Wed Nov 20 16:35:43 2024 00:34:14.108 read: IOPS=296, BW=37.0MiB/s (38.8MB/s)(370MiB/10006msec) 00:34:14.108 slat (nsec): min=6527, max=62141, avg=19228.14, stdev=6321.59 00:34:14.108 clat (usec): min=7673, max=14222, avg=10108.22, stdev=769.38 00:34:14.108 lat (usec): min=7685, max=14250, avg=10127.44, stdev=768.96 00:34:14.108 clat percentiles (usec): 00:34:14.108 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:34:14.108 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:34:14.108 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11469], 00:34:14.108 | 99.00th=[12125], 99.50th=[12256], 99.90th=[13304], 99.95th=[14222], 00:34:14.108 | 99.99th=[14222] 00:34:14.108 bw ( KiB/s): min=34560, max=39168, per=35.32%, avg=37900.80, stdev=1196.36, samples=20 00:34:14.108 iops : min= 270, max= 306, avg=296.10, stdev= 9.35, samples=20 00:34:14.108 lat (msec) : 10=44.35%, 20=55.65% 00:34:14.108 cpu : usr=93.71%, sys=5.20%, ctx=266, majf=0, minf=129 00:34:14.108 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:14.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.108 issued rwts: total=2963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.108 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:14.108 filename0: (groupid=0, jobs=1): err= 0: pid=2183823: Wed Nov 20 16:35:43 2024 00:34:14.108 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(353MiB/10045msec) 00:34:14.108 slat (nsec): min=6725, max=50379, avg=17558.79, stdev=7831.83 00:34:14.108 clat (usec): min=7431, max=49781, avg=10651.28, stdev=1330.04 00:34:14.108 lat (usec): min=7444, max=49793, avg=10668.84, stdev=1330.84 00:34:14.108 clat percentiles (usec): 00:34:14.108 | 1.00th=[ 8848], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:34:14.108 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:34:14.108 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:34:14.108 | 99.00th=[12911], 99.50th=[13304], 99.90th=[14877], 99.95th=[47973], 00:34:14.108 | 99.99th=[49546] 00:34:14.108 bw ( KiB/s): min=33280, max=39680, per=33.62%, avg=36070.40, stdev=1483.22, samples=20 00:34:14.108 iops : min= 260, max= 310, avg=281.80, stdev=11.59, samples=20 00:34:14.108 lat (msec) : 10=22.73%, 20=77.20%, 50=0.07% 00:34:14.108 cpu : usr=96.86%, sys=2.82%, ctx=17, majf=0, minf=187 00:34:14.108 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:14.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.108 issued rwts: total=2820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.108 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:14.108 filename0: (groupid=0, jobs=1): err= 0: pid=2183824: Wed Nov 20 16:35:43 2024 00:34:14.108 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(330MiB/10004msec) 00:34:14.108 slat (nsec): min=6793, max=48801, avg=17840.80, stdev=7904.80 00:34:14.108 clat (usec): min=5111, max=14838, avg=11360.69, stdev=867.40 00:34:14.108 lat (usec): min=5121, max=14873, avg=11378.53, stdev=866.94 00:34:14.108 clat percentiles (usec): 00:34:14.108 | 1.00th=[ 9634], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:34:14.108 | 
30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:34:14.108 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12518], 95.00th=[12911], 00:34:14.108 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14615], 99.95th=[14877], 00:34:14.108 | 99.99th=[14877] 00:34:14.108 bw ( KiB/s): min=30720, max=34816, per=31.44%, avg=33728.00, stdev=1076.15, samples=20 00:34:14.108 iops : min= 240, max= 272, avg=263.50, stdev= 8.41, samples=20 00:34:14.108 lat (msec) : 10=4.17%, 20=95.83% 00:34:14.108 cpu : usr=96.84%, sys=2.83%, ctx=26, majf=0, minf=114 00:34:14.108 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:14.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.108 issued rwts: total=2637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.108 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:14.108 00:34:14.108 Run status group 0 (all jobs): 00:34:14.108 READ: bw=105MiB/s (110MB/s), 32.9MiB/s-37.0MiB/s (34.5MB/s-38.8MB/s), io=1053MiB (1104MB), run=10004-10045msec 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.108 00:34:14.108 real 0m11.202s 00:34:14.108 user 0m35.567s 00:34:14.108 sys 0m1.458s 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.108 16:35:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.108 ************************************ 00:34:14.108 END TEST fio_dif_digest 00:34:14.108 ************************************ 00:34:14.108 16:35:43 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:14.108 16:35:43 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:14.108 16:35:43 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:14.108 16:35:43 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:14.108 16:35:43 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:14.108 16:35:43 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:14.108 16:35:43 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:14.108 16:35:43 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:14.108 rmmod nvme_tcp 00:34:14.108 rmmod nvme_fabrics 00:34:14.108 rmmod nvme_keyring 00:34:14.108 16:35:43 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v 
-r nvme-fabrics 00:34:14.108 16:35:43 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:14.108 16:35:43 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:14.108 16:35:43 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2175437 ']' 00:34:14.108 16:35:43 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2175437 00:34:14.108 16:35:43 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2175437 ']' 00:34:14.108 16:35:43 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2175437 00:34:14.108 16:35:43 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:14.108 16:35:43 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.108 16:35:43 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2175437 00:34:14.108 16:35:43 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:14.108 16:35:43 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:14.108 16:35:43 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2175437' 00:34:14.108 killing process with pid 2175437 00:34:14.108 16:35:43 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2175437 00:34:14.108 16:35:43 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2175437 00:34:14.108 16:35:44 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:14.108 16:35:44 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:16.013 Waiting for block devices as requested 00:34:16.013 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:16.013 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:16.013 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:16.013 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:16.013 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:16.272 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:16.272 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:16.272 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:16.531 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:16.531 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:16.531 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:16.790 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:16.790 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:16.790 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:16.790 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:17.049 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:17.049 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:17.049 16:35:48 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.049 16:35:48 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.049 16:35:48 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:17.049 16:35:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:17.049 16:35:48 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.049 16:35:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.049 16:35:48 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.049 16:35:48 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.049 16:35:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.049 16:35:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:17.049 16:35:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.583 16:35:50 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:19.584 00:34:19.584 real 1m14.625s 
00:34:19.584 user 7m11.965s 00:34:19.584 sys 0m19.919s 00:34:19.584 16:35:50 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.584 16:35:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:19.584 ************************************ 00:34:19.584 END TEST nvmf_dif 00:34:19.584 ************************************ 00:34:19.584 16:35:50 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:19.584 16:35:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:19.584 16:35:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.584 16:35:50 -- common/autotest_common.sh@10 -- # set +x 00:34:19.584 ************************************ 00:34:19.584 START TEST nvmf_abort_qd_sizes 00:34:19.584 ************************************ 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:19.584 * Looking for test storage... 00:34:19.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.584 --rc genhtml_branch_coverage=1 00:34:19.584 --rc genhtml_function_coverage=1 00:34:19.584 --rc genhtml_legend=1 00:34:19.584 --rc geninfo_all_blocks=1 00:34:19.584 --rc geninfo_unexecuted_blocks=1 00:34:19.584 00:34:19.584 ' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.584 --rc genhtml_branch_coverage=1 00:34:19.584 --rc genhtml_function_coverage=1 00:34:19.584 --rc genhtml_legend=1 00:34:19.584 --rc geninfo_all_blocks=1 00:34:19.584 --rc geninfo_unexecuted_blocks=1 00:34:19.584 00:34:19.584 ' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.584 --rc genhtml_branch_coverage=1 00:34:19.584 --rc genhtml_function_coverage=1 00:34:19.584 --rc genhtml_legend=1 00:34:19.584 --rc geninfo_all_blocks=1 00:34:19.584 --rc geninfo_unexecuted_blocks=1 00:34:19.584 00:34:19.584 ' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.584 --rc genhtml_branch_coverage=1 00:34:19.584 --rc genhtml_function_coverage=1 00:34:19.584 --rc genhtml_legend=1 00:34:19.584 --rc geninfo_all_blocks=1 00:34:19.584 --rc geninfo_unexecuted_blocks=1 00:34:19.584 00:34:19.584 ' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:19.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.584 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:19.585 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:19.585 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:19.585 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.585 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:19.585 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.585 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:19.585 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:19.585 16:35:50 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:19.585 16:35:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:25.054 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:25.054 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:25.054 Found net devices under 0000:86:00.0: cvl_0_0 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:25.054 Found net devices under 0000:86:00.1: cvl_0_1 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:25.054 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.055 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.055 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:25.055 16:35:56 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:25.055 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.055 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.055 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:25.055 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:25.055 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.055 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:25.313 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:25.313 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:25.313 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:25.313 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:25.313 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:25.313 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:25.313 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:25.313 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:25.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:25.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:34:25.313 00:34:25.313 --- 10.0.0.2 ping statistics --- 00:34:25.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.313 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:34:25.313 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:25.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:25.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:34:25.314 00:34:25.314 --- 10.0.0.1 ping statistics --- 00:34:25.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.314 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:34:25.314 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:25.314 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:25.314 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:25.314 16:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:28.602 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:28.602 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:28.602 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:28.602 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:28.602 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:28.602 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:28.602 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:28.603 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:28.603 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:28.603 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:28.603 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:28.603 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:28.603 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:28.603 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:28.603 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:28.603 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:29.540 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2191896 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2191896 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2191896 ']' 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:29.799 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:29.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.800 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:29.800 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:29.800 [2024-11-20 16:36:00.923557] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:34:29.800 [2024-11-20 16:36:00.923605] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.800 [2024-11-20 16:36:01.002959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:30.058 [2024-11-20 16:36:01.050159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.058 [2024-11-20 16:36:01.050210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.058 [2024-11-20 16:36:01.050221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.058 [2024-11-20 16:36:01.050229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.058 [2024-11-20 16:36:01.050235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:30.058 [2024-11-20 16:36:01.051911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.058 [2024-11-20 16:36:01.052019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:30.058 [2024-11-20 16:36:01.052051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.058 [2024-11-20 16:36:01.052051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:30.058 
16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.058 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:30.058 ************************************ 00:34:30.058 START TEST spdk_target_abort 00:34:30.058 ************************************ 00:34:30.058 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:30.058 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:30.058 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:30.058 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.058 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:33.366 spdk_targetn1 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:33.366 [2024-11-20 16:36:04.081135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:33.366 [2024-11-20 16:36:04.129485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:33.366 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.649 Initializing NVMe Controllers 00:34:36.649 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:36.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:36.649 Initialization complete. Launching workers. 00:34:36.649 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15425, failed: 0 00:34:36.649 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1355, failed to submit 14070 00:34:36.649 success 704, unsuccessful 651, failed 0 00:34:36.649 16:36:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:36.649 16:36:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:39.929 Initializing NVMe Controllers 00:34:39.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:39.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:39.929 Initialization complete. Launching workers. 00:34:39.929 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8580, failed: 0 00:34:39.929 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1270, failed to submit 7310 00:34:39.929 success 332, unsuccessful 938, failed 0 00:34:39.929 16:36:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:39.929 16:36:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:43.212 Initializing NVMe Controllers 00:34:43.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:43.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:43.212 Initialization complete. Launching workers. 
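The three abort passes in this test differ only in the -q queue-depth value (4, 24, 64) handed to SPDK's bundled abort example against the 10.0.0.2:4420 TCP listener created above; the qds=(4 24 64) array and the per-qd loop are visible in the xtrace lines. A minimal sketch of that invocation loop, using only the paths, NQN and address already recorded in this log:
  # sketch: repeat the SPDK abort example at several queue depths
  # (workspace path, listener address and subsystem NQN taken from the log above)
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  for qd in 4 24 64; do
    "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done
Each pass then reports I/O completed, aborts submitted, and the success/unsuccessful split, as in the NS:/CTRLR: lines that follow.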
00:34:43.212 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38586, failed: 0 00:34:43.212 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2871, failed to submit 35715 00:34:43.212 success 592, unsuccessful 2279, failed 0 00:34:43.212 16:36:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:43.212 16:36:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.212 16:36:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:43.212 16:36:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.212 16:36:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:43.212 16:36:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.212 16:36:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2191896 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2191896 ']' 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2191896 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2191896 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2191896' 00:34:44.586 killing process with pid 2191896 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2191896 00:34:44.586 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2191896 00:34:44.845 00:34:44.845 real 0m14.667s 00:34:44.845 user 0m55.901s 00:34:44.845 sys 0m2.693s 00:34:44.845 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.845 16:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:44.845 ************************************ 00:34:44.845 END TEST spdk_target_abort 00:34:44.845 ************************************ 00:34:44.845 16:36:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:44.845 16:36:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:44.845 16:36:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:44.845 16:36:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:44.845 ************************************ 00:34:44.845 START TEST kernel_target_abort 00:34:44.845 
************************************ 00:34:44.845 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:44.845 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:44.845 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:44.845 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.845 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.845 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.845 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.845 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:44.846 16:36:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:44.846 16:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:44.846 16:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:48.136 Waiting for block devices as requested 00:34:48.136 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:48.136 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:48.136 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:48.136 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:48.136 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:48.136 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:48.136 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:48.136 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:48.136 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:48.395 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:48.395 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:48.395 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:48.654 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:48.654 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:48.654 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:48.913 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:48.913 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:48.913 No valid GPT data, bailing 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:48.913 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:49.171 16:36:20 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:49.171 00:34:49.171 Discovery Log Number of Records 2, Generation counter 2 00:34:49.171 =====Discovery Log Entry 0====== 00:34:49.171 trtype: tcp 00:34:49.171 adrfam: ipv4 00:34:49.171 subtype: current discovery subsystem 00:34:49.171 treq: not specified, sq flow control disable supported 00:34:49.171 portid: 1 00:34:49.171 trsvcid: 4420 00:34:49.171 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:49.171 traddr: 10.0.0.1 00:34:49.171 eflags: none 00:34:49.171 sectype: none 00:34:49.171 =====Discovery Log Entry 1====== 00:34:49.171 trtype: tcp 00:34:49.171 adrfam: ipv4 00:34:49.171 subtype: nvme subsystem 00:34:49.171 treq: not specified, sq flow control disable supported 00:34:49.171 portid: 1 00:34:49.171 trsvcid: 4420 00:34:49.171 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:49.171 traddr: 10.0.0.1 00:34:49.171 eflags: none 00:34:49.171 sectype: none 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:49.171 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:49.172 16:36:20 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:49.172 16:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:52.457 Initializing NVMe Controllers 00:34:52.457 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:52.457 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:52.457 Initialization complete. Launching workers. 00:34:52.457 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95231, failed: 0 00:34:52.457 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95231, failed to submit 0 00:34:52.457 success 0, unsuccessful 95231, failed 0 00:34:52.457 16:36:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:52.457 16:36:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:55.744 Initializing NVMe Controllers 00:34:55.744 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:55.744 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:55.744 Initialization complete. Launching workers. 
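The configure_kernel_target steps recorded above drive the Linux nvmet configfs interface directly: load the nvmet modules, create a subsystem and namespace, back the namespace with /dev/nvme0n1, expose port 1 as a TCP listener on 10.0.0.1:4420, and link the subsystem into the port. The xtrace output shows only the echo values, not their redirection targets, so the sketch below fills those in from the standard upstream nvmet configfs attribute names; treat the exact attribute paths as an assumption rather than a quote from this log:
  # sketch of a kernel NVMe-oF TCP target via configfs
  # (attribute file names assumed from the upstream nvmet configfs layout, not shown in the xtrace)
  modprobe nvmet nvmet-tcp
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  mkdir ports/1
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp > ports/1/addr_trtype
  echo 4420 > ports/1/addr_trsvcid
  echo ipv4 > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
The nvme discover output above (two discovery log records, one for the discovery subsystem and one for nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420) is what confirms the kernel target came up before the abort passes are run against it.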
00:34:55.744 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150888, failed: 0 00:34:55.744 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38066, failed to submit 112822 00:34:55.744 success 0, unsuccessful 38066, failed 0 00:34:55.744 16:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:55.744 16:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:59.029 Initializing NVMe Controllers 00:34:59.029 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:59.029 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:59.029 Initialization complete. Launching workers. 00:34:59.029 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142099, failed: 0 00:34:59.029 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35594, failed to submit 106505 00:34:59.029 success 0, unsuccessful 35594, failed 0 00:34:59.029 16:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:59.029 16:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:59.029 16:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:59.029 16:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:59.029 16:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:59.029 16:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:59.029 16:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:59.029 16:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:59.029 16:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:59.029 16:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:01.566 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:01.566 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:35:01.566 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:02.945 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:02.945 00:35:02.945 real 0m18.035s 00:35:02.945 user 0m9.086s 00:35:02.945 sys 0m5.126s 00:35:02.945 16:36:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:02.945 16:36:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.945 ************************************ 00:35:02.945 END TEST kernel_target_abort 00:35:02.945 ************************************ 00:35:02.945 16:36:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:02.945 16:36:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:02.945 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:02.945 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:02.945 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:02.945 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:02.945 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:02.945 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:02.945 rmmod nvme_tcp 00:35:02.945 rmmod nvme_fabrics 00:35:02.945 rmmod nvme_keyring 00:35:02.945 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:02.945 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:02.945 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:02.946 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2191896 ']' 00:35:02.946 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2191896 00:35:02.946 16:36:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2191896 ']' 00:35:02.946 16:36:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2191896 00:35:02.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2191896) - No such process 00:35:02.946 16:36:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2191896 is not found' 00:35:02.946 Process with pid 2191896 is not found 00:35:02.946 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:02.946 16:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:06.239 Waiting for block devices as requested 00:35:06.239 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:06.239 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:06.239 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:06.239 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:06.239 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:06.239 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:06.239 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:06.239 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:06.498 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:06.498 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:06.498 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:06.757 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:06.757 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:06.757 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:06.757 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:07.016 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:07.016 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:07.016 16:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:07.016 16:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:07.016 16:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:07.016 16:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:07.016 16:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:07.016 16:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:07.016 16:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:07.016 16:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:07.016 16:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.016 16:36:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:07.016 16:36:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.551 16:36:40 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:09.551 00:35:09.551 real 0m49.909s 00:35:09.551 user 1m9.349s 00:35:09.551 sys 0m16.578s 00:35:09.551 16:36:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:09.551 16:36:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:09.551 ************************************ 00:35:09.551 END TEST nvmf_abort_qd_sizes 00:35:09.551 ************************************ 00:35:09.551 16:36:40 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:09.551 16:36:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:09.551 16:36:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:09.551 16:36:40 -- common/autotest_common.sh@10 -- # set +x 00:35:09.551 ************************************ 00:35:09.551 START TEST keyring_file 00:35:09.551 ************************************ 00:35:09.551 16:36:40 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:09.551 * Looking for test storage... 
00:35:09.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:09.551 16:36:40 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:09.551 16:36:40 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:09.551 16:36:40 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:09.551 16:36:40 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:09.551 16:36:40 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:09.551 16:36:40 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:09.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.551 --rc genhtml_branch_coverage=1 00:35:09.551 --rc genhtml_function_coverage=1 00:35:09.551 --rc genhtml_legend=1 00:35:09.551 --rc geninfo_all_blocks=1 00:35:09.551 --rc geninfo_unexecuted_blocks=1 00:35:09.551 00:35:09.551 ' 00:35:09.551 16:36:40 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:09.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.551 --rc genhtml_branch_coverage=1 00:35:09.551 --rc genhtml_function_coverage=1 00:35:09.551 --rc genhtml_legend=1 00:35:09.551 --rc geninfo_all_blocks=1 
00:35:09.551 --rc geninfo_unexecuted_blocks=1 00:35:09.551 00:35:09.551 ' 00:35:09.551 16:36:40 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:09.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.551 --rc genhtml_branch_coverage=1 00:35:09.551 --rc genhtml_function_coverage=1 00:35:09.551 --rc genhtml_legend=1 00:35:09.551 --rc geninfo_all_blocks=1 00:35:09.551 --rc geninfo_unexecuted_blocks=1 00:35:09.551 00:35:09.551 ' 00:35:09.551 16:36:40 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:09.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.551 --rc genhtml_branch_coverage=1 00:35:09.551 --rc genhtml_function_coverage=1 00:35:09.551 --rc genhtml_legend=1 00:35:09.551 --rc geninfo_all_blocks=1 00:35:09.551 --rc geninfo_unexecuted_blocks=1 00:35:09.551 00:35:09.551 ' 00:35:09.551 16:36:40 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:09.551 16:36:40 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.551 16:36:40 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.551 16:36:40 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.552 16:36:40 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.552 16:36:40 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.552 16:36:40 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.552 16:36:40 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:09.552 16:36:40 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:09.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uLkzcJ97yd 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uLkzcJ97yd 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uLkzcJ97yd 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uLkzcJ97yd 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eEiqtx3bQe 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:09.552 16:36:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eEiqtx3bQe 00:35:09.552 16:36:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eEiqtx3bQe 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.eEiqtx3bQe 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@30 -- # tgtpid=2201141 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:09.552 16:36:40 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2201141 00:35:09.552 16:36:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2201141 ']' 00:35:09.552 16:36:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:09.552 16:36:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:09.552 16:36:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:09.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:09.552 16:36:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:09.552 16:36:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:09.552 [2024-11-20 16:36:40.725800] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:35:09.552 [2024-11-20 16:36:40.725852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2201141 ] 00:35:09.811 [2024-11-20 16:36:40.799149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.811 [2024-11-20 16:36:40.839348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:10.071 16:36:41 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:10.071 [2024-11-20 16:36:41.073940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:10.071 null0 00:35:10.071 [2024-11-20 16:36:41.106002] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:10.071 [2024-11-20 16:36:41.106358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.071 16:36:41 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:10.071 [2024-11-20 16:36:41.134066] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:10.071 request: 00:35:10.071 { 00:35:10.071 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.071 "secure_channel": false, 00:35:10.071 "listen_address": { 00:35:10.071 "trtype": "tcp", 00:35:10.071 "traddr": "127.0.0.1", 00:35:10.071 "trsvcid": "4420" 00:35:10.071 }, 00:35:10.071 "method": "nvmf_subsystem_add_listener", 00:35:10.071 "req_id": 1 00:35:10.071 } 00:35:10.071 Got JSON-RPC error response 00:35:10.071 response: 00:35:10.071 { 00:35:10.071 
"code": -32602, 00:35:10.071 "message": "Invalid parameters" 00:35:10.071 } 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:10.071 16:36:41 keyring_file -- keyring/file.sh@47 -- # bperfpid=2201148 00:35:10.071 16:36:41 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2201148 /var/tmp/bperf.sock 00:35:10.071 16:36:41 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2201148 ']' 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:10.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:10.071 16:36:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:10.071 [2024-11-20 16:36:41.188862] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:35:10.071 [2024-11-20 16:36:41.188903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2201148 ] 00:35:10.071 [2024-11-20 16:36:41.262385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.331 [2024-11-20 16:36:41.303057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:10.331 16:36:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:10.331 16:36:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:10.331 16:36:41 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uLkzcJ97yd 00:35:10.331 16:36:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uLkzcJ97yd 00:35:10.589 16:36:41 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eEiqtx3bQe 00:35:10.589 16:36:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eEiqtx3bQe 00:35:10.589 16:36:41 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:10.589 16:36:41 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:10.589 16:36:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.589 16:36:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.589 16:36:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:10.848 16:36:41 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.uLkzcJ97yd == \/\t\m\p\/\t\m\p\.\u\L\k\z\c\J\9\7\y\d ]] 00:35:10.848 16:36:41 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:10.848 16:36:41 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:10.848 16:36:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.848 16:36:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.848 16:36:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:11.106 16:36:42 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.eEiqtx3bQe == \/\t\m\p\/\t\m\p\.\e\E\i\q\t\x\3\b\Q\e ]] 00:35:11.106 16:36:42 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:11.106 16:36:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:11.106 16:36:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.106 16:36:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.107 16:36:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:11.107 16:36:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.365 16:36:42 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:11.365 16:36:42 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:11.365 16:36:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:11.365 16:36:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.365 16:36:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.365 16:36:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:11.365 16:36:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.365 16:36:42 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:11.365 16:36:42 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.365 16:36:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.624 [2024-11-20 16:36:42.738861] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:11.624 nvme0n1 00:35:11.624 16:36:42 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:11.624 16:36:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:11.624 16:36:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.624 16:36:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.624 16:36:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.624 16:36:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:11.883 16:36:43 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:11.883 16:36:43 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:11.883 16:36:43 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:35:11.883 16:36:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.883 16:36:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.883 16:36:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.883 16:36:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:12.142 16:36:43 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:12.142 16:36:43 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.142 Running I/O for 1 seconds... 00:35:13.336 19379.00 IOPS, 75.70 MiB/s 00:35:13.336 Latency(us) 00:35:13.336 [2024-11-20T15:36:44.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.336 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:13.336 nvme0n1 : 1.00 19421.11 75.86 0.00 0.00 6578.08 3011.54 17850.76 00:35:13.336 [2024-11-20T15:36:44.570Z] =================================================================================================================== 00:35:13.336 [2024-11-20T15:36:44.570Z] Total : 19421.11 75.86 0.00 0.00 6578.08 3011.54 17850.76 00:35:13.336 { 00:35:13.336 "results": [ 00:35:13.336 { 00:35:13.336 "job": "nvme0n1", 00:35:13.336 "core_mask": "0x2", 00:35:13.336 "workload": "randrw", 00:35:13.336 "percentage": 50, 00:35:13.336 "status": "finished", 00:35:13.336 "queue_depth": 128, 00:35:13.336 "io_size": 4096, 00:35:13.336 "runtime": 1.004474, 00:35:13.336 "iops": 19421.109954065512, 00:35:13.336 "mibps": 75.86371075806841, 00:35:13.336 "io_failed": 0, 00:35:13.336 "io_timeout": 0, 00:35:13.336 "avg_latency_us": 6578.079536014529, 00:35:13.336 "min_latency_us": 3011.535238095238, 00:35:13.336 "max_latency_us": 17850.758095238096 00:35:13.336 } 00:35:13.336 ], 00:35:13.336 "core_count": 1 00:35:13.336 } 00:35:13.336 16:36:44 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:13.336 16:36:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:13.336 16:36:44 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:13.336 16:36:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:13.336 16:36:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:13.336 16:36:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.336 16:36:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:13.336 16:36:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.595 16:36:44 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:13.595 16:36:44 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:13.595 16:36:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:13.595 16:36:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:13.595 16:36:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.595 16:36:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:13.595 16:36:44 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.854 16:36:44 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:13.854 16:36:44 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:13.854 16:36:44 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:13.854 16:36:44 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:13.854 16:36:44 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:13.854 16:36:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.854 16:36:44 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:13.854 16:36:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.854 16:36:44 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:13.854 16:36:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:14.114 [2024-11-20 16:36:45.088835] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:14.114 [2024-11-20 16:36:45.089546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95c1f0 (107): Transport endpoint is not connected 00:35:14.114 [2024-11-20 16:36:45.090542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95c1f0 (9): Bad file descriptor 00:35:14.114 [2024-11-20 16:36:45.091543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:14.114 [2024-11-20 16:36:45.091552] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:14.114 [2024-11-20 16:36:45.091559] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:14.114 [2024-11-20 16:36:45.091567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
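The trace has now covered the core positive and negative paths: both PSK files were registered over /var/tmp/bperf.sock, nvme0 was attached with key0 and driven by bdevperf (about 19.4k IOPS over the 1-second run), detached, and an attach with key1 was attempted and rejected; the JSON-RPC request and error response for that rejected attach follow below. A condensed sketch of those steps, using the helpers visible in the trace (bperf_cmd forwards to rpc.py -s /var/tmp/bperf.sock, NOT inverts an expected failure; the redirection inside prep_key is inferred from the chmod and echo that follow it):

  key0path=$(mktemp)                                   # /tmp/tmp.uLkzcJ97yd in this run
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"   # helper traced in nvmf/common.sh
  chmod 0600 "$key0path"                               # 0660 is rejected later in the test
  bperf_cmd keyring_file_add_key key0 "$key0path"
  bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  bperf_cmd bdev_nvme_detach_controller nvme0
  # attaching with the other key must fail; NOT succeeds only when the wrapped command fails
  NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1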
00:35:14.114 request: 00:35:14.114 { 00:35:14.114 "name": "nvme0", 00:35:14.114 "trtype": "tcp", 00:35:14.114 "traddr": "127.0.0.1", 00:35:14.114 "adrfam": "ipv4", 00:35:14.114 "trsvcid": "4420", 00:35:14.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.114 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.114 "prchk_reftag": false, 00:35:14.114 "prchk_guard": false, 00:35:14.114 "hdgst": false, 00:35:14.114 "ddgst": false, 00:35:14.114 "psk": "key1", 00:35:14.114 "allow_unrecognized_csi": false, 00:35:14.114 "method": "bdev_nvme_attach_controller", 00:35:14.114 "req_id": 1 00:35:14.114 } 00:35:14.114 Got JSON-RPC error response 00:35:14.114 response: 00:35:14.114 { 00:35:14.114 "code": -5, 00:35:14.114 "message": "Input/output error" 00:35:14.114 } 00:35:14.114 16:36:45 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:14.114 16:36:45 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:14.114 16:36:45 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:14.114 16:36:45 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:14.114 16:36:45 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:14.114 16:36:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:14.114 16:36:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.114 16:36:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.114 16:36:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:14.114 16:36:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.114 16:36:45 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:14.114 16:36:45 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:14.114 16:36:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:14.114 16:36:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.114 16:36:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.114 16:36:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:14.114 16:36:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.373 16:36:45 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:14.373 16:36:45 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:14.373 16:36:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:14.631 16:36:45 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:14.632 16:36:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:14.891 16:36:45 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:14.891 16:36:45 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:14.891 16:36:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.891 16:36:46 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:35:14.891 16:36:46 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.uLkzcJ97yd 00:35:14.891 16:36:46 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uLkzcJ97yd 00:35:14.891 16:36:46 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:14.891 16:36:46 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uLkzcJ97yd 00:35:14.891 16:36:46 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:14.891 16:36:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.891 16:36:46 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:14.891 16:36:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.891 16:36:46 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uLkzcJ97yd 00:35:14.891 16:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uLkzcJ97yd 00:35:15.149 [2024-11-20 16:36:46.236555] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uLkzcJ97yd': 0100660 00:35:15.149 [2024-11-20 16:36:46.236579] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:15.149 request: 00:35:15.149 { 00:35:15.149 "name": "key0", 00:35:15.149 "path": "/tmp/tmp.uLkzcJ97yd", 00:35:15.149 "method": "keyring_file_add_key", 00:35:15.149 "req_id": 1 00:35:15.149 } 00:35:15.149 Got JSON-RPC error response 00:35:15.149 response: 00:35:15.149 { 00:35:15.149 "code": -1, 00:35:15.150 "message": "Operation not permitted" 00:35:15.150 } 00:35:15.150 16:36:46 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:15.150 16:36:46 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:15.150 16:36:46 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:15.150 16:36:46 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:15.150 16:36:46 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.uLkzcJ97yd 00:35:15.150 16:36:46 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uLkzcJ97yd 00:35:15.150 16:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uLkzcJ97yd 00:35:15.408 16:36:46 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.uLkzcJ97yd 00:35:15.408 16:36:46 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:15.408 16:36:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:15.408 16:36:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:15.408 16:36:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.408 16:36:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:15.408 16:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.667 16:36:46 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:15.667 16:36:46 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:15.667 16:36:46 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:15.667 16:36:46 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:15.668 16:36:46 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:15.668 16:36:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.668 16:36:46 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:15.668 16:36:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.668 16:36:46 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:15.668 16:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:15.668 [2024-11-20 16:36:46.830125] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uLkzcJ97yd': No such file or directory 00:35:15.668 [2024-11-20 16:36:46.830145] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:15.668 [2024-11-20 16:36:46.830160] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:15.668 [2024-11-20 16:36:46.830166] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:15.668 [2024-11-20 16:36:46.830173] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:15.668 [2024-11-20 16:36:46.830179] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:15.668 request: 00:35:15.668 { 00:35:15.668 "name": "nvme0", 00:35:15.668 "trtype": "tcp", 00:35:15.668 "traddr": "127.0.0.1", 00:35:15.668 "adrfam": "ipv4", 00:35:15.668 "trsvcid": "4420", 00:35:15.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.668 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.668 "prchk_reftag": false, 00:35:15.668 "prchk_guard": false, 00:35:15.668 "hdgst": false, 00:35:15.668 "ddgst": false, 00:35:15.668 "psk": "key0", 00:35:15.668 "allow_unrecognized_csi": false, 00:35:15.668 "method": "bdev_nvme_attach_controller", 00:35:15.668 "req_id": 1 00:35:15.668 } 00:35:15.668 Got JSON-RPC error response 00:35:15.668 response: 00:35:15.668 { 00:35:15.668 "code": -19, 00:35:15.668 "message": "No such device" 00:35:15.668 } 00:35:15.668 16:36:46 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:15.668 16:36:46 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:15.668 16:36:46 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:15.668 16:36:46 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:15.668 16:36:46 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:15.668 16:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:15.926 16:36:47 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:15.926 16:36:47 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:35:15.927 16:36:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:15.927 16:36:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:15.927 16:36:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:15.927 16:36:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:15.927 16:36:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.RjzKmCjhX4 00:35:15.927 16:36:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:15.927 16:36:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:15.927 16:36:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:15.927 16:36:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:15.927 16:36:47 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:15.927 16:36:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:15.927 16:36:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:15.927 16:36:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RjzKmCjhX4 00:35:15.927 16:36:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.RjzKmCjhX4 00:35:15.927 16:36:47 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.RjzKmCjhX4 00:35:15.927 16:36:47 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RjzKmCjhX4 00:35:15.927 16:36:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RjzKmCjhX4 00:35:16.185 16:36:47 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:16.185 16:36:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:16.443 nvme0n1 00:35:16.443 16:36:47 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:16.443 16:36:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.443 16:36:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:16.443 16:36:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.443 16:36:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.443 16:36:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.701 16:36:47 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:16.701 16:36:47 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:16.701 16:36:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:16.702 16:36:47 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:16.702 16:36:47 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:16.702 16:36:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.702 16:36:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.702 16:36:47 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.960 16:36:48 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:16.960 16:36:48 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:16.960 16:36:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:16.960 16:36:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.960 16:36:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.960 16:36:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.960 16:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.219 16:36:48 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:17.219 16:36:48 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:17.219 16:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:17.478 16:36:48 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:17.478 16:36:48 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:17.478 16:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.478 16:36:48 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:17.478 16:36:48 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RjzKmCjhX4 00:35:17.478 16:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RjzKmCjhX4 00:35:17.742 16:36:48 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eEiqtx3bQe 00:35:17.742 16:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eEiqtx3bQe 00:35:18.015 16:36:49 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:18.015 16:36:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:18.303 nvme0n1 00:35:18.303 16:36:49 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:18.303 16:36:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:18.577 16:36:49 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:18.577 "subsystems": [ 00:35:18.577 { 00:35:18.577 "subsystem": "keyring", 00:35:18.577 "config": [ 00:35:18.577 { 00:35:18.577 "method": "keyring_file_add_key", 00:35:18.577 "params": { 00:35:18.577 "name": "key0", 00:35:18.577 "path": "/tmp/tmp.RjzKmCjhX4" 00:35:18.577 } 00:35:18.577 }, 00:35:18.577 { 00:35:18.577 "method": "keyring_file_add_key", 00:35:18.577 "params": { 00:35:18.577 "name": "key1", 00:35:18.577 "path": "/tmp/tmp.eEiqtx3bQe" 00:35:18.577 } 00:35:18.577 } 00:35:18.577 ] 00:35:18.577 
}, 00:35:18.577 { 00:35:18.577 "subsystem": "iobuf", 00:35:18.578 "config": [ 00:35:18.578 { 00:35:18.578 "method": "iobuf_set_options", 00:35:18.578 "params": { 00:35:18.578 "small_pool_count": 8192, 00:35:18.578 "large_pool_count": 1024, 00:35:18.578 "small_bufsize": 8192, 00:35:18.578 "large_bufsize": 135168, 00:35:18.578 "enable_numa": false 00:35:18.578 } 00:35:18.578 } 00:35:18.578 ] 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "subsystem": "sock", 00:35:18.578 "config": [ 00:35:18.578 { 00:35:18.578 "method": "sock_set_default_impl", 00:35:18.578 "params": { 00:35:18.578 "impl_name": "posix" 00:35:18.578 } 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "method": "sock_impl_set_options", 00:35:18.578 "params": { 00:35:18.578 "impl_name": "ssl", 00:35:18.578 "recv_buf_size": 4096, 00:35:18.578 "send_buf_size": 4096, 00:35:18.578 "enable_recv_pipe": true, 00:35:18.578 "enable_quickack": false, 00:35:18.578 "enable_placement_id": 0, 00:35:18.578 "enable_zerocopy_send_server": true, 00:35:18.578 "enable_zerocopy_send_client": false, 00:35:18.578 "zerocopy_threshold": 0, 00:35:18.578 "tls_version": 0, 00:35:18.578 "enable_ktls": false 00:35:18.578 } 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "method": "sock_impl_set_options", 00:35:18.578 "params": { 00:35:18.578 "impl_name": "posix", 00:35:18.578 "recv_buf_size": 2097152, 00:35:18.578 "send_buf_size": 2097152, 00:35:18.578 "enable_recv_pipe": true, 00:35:18.578 "enable_quickack": false, 00:35:18.578 "enable_placement_id": 0, 00:35:18.578 "enable_zerocopy_send_server": true, 00:35:18.578 "enable_zerocopy_send_client": false, 00:35:18.578 "zerocopy_threshold": 0, 00:35:18.578 "tls_version": 0, 00:35:18.578 "enable_ktls": false 00:35:18.578 } 00:35:18.578 } 00:35:18.578 ] 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "subsystem": "vmd", 00:35:18.578 "config": [] 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "subsystem": "accel", 00:35:18.578 "config": [ 00:35:18.578 { 00:35:18.578 "method": "accel_set_options", 00:35:18.578 "params": { 00:35:18.578 "small_cache_size": 128, 00:35:18.578 "large_cache_size": 16, 00:35:18.578 "task_count": 2048, 00:35:18.578 "sequence_count": 2048, 00:35:18.578 "buf_count": 2048 00:35:18.578 } 00:35:18.578 } 00:35:18.578 ] 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "subsystem": "bdev", 00:35:18.578 "config": [ 00:35:18.578 { 00:35:18.578 "method": "bdev_set_options", 00:35:18.578 "params": { 00:35:18.578 "bdev_io_pool_size": 65535, 00:35:18.578 "bdev_io_cache_size": 256, 00:35:18.578 "bdev_auto_examine": true, 00:35:18.578 "iobuf_small_cache_size": 128, 00:35:18.578 "iobuf_large_cache_size": 16 00:35:18.578 } 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "method": "bdev_raid_set_options", 00:35:18.578 "params": { 00:35:18.578 "process_window_size_kb": 1024, 00:35:18.578 "process_max_bandwidth_mb_sec": 0 00:35:18.578 } 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "method": "bdev_iscsi_set_options", 00:35:18.578 "params": { 00:35:18.578 "timeout_sec": 30 00:35:18.578 } 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "method": "bdev_nvme_set_options", 00:35:18.578 "params": { 00:35:18.578 "action_on_timeout": "none", 00:35:18.578 "timeout_us": 0, 00:35:18.578 "timeout_admin_us": 0, 00:35:18.578 "keep_alive_timeout_ms": 10000, 00:35:18.578 "arbitration_burst": 0, 00:35:18.578 "low_priority_weight": 0, 00:35:18.578 "medium_priority_weight": 0, 00:35:18.578 "high_priority_weight": 0, 00:35:18.578 "nvme_adminq_poll_period_us": 10000, 00:35:18.578 "nvme_ioq_poll_period_us": 0, 00:35:18.578 "io_queue_requests": 512, 00:35:18.578 
"delay_cmd_submit": true, 00:35:18.578 "transport_retry_count": 4, 00:35:18.578 "bdev_retry_count": 3, 00:35:18.578 "transport_ack_timeout": 0, 00:35:18.578 "ctrlr_loss_timeout_sec": 0, 00:35:18.578 "reconnect_delay_sec": 0, 00:35:18.578 "fast_io_fail_timeout_sec": 0, 00:35:18.578 "disable_auto_failback": false, 00:35:18.578 "generate_uuids": false, 00:35:18.578 "transport_tos": 0, 00:35:18.578 "nvme_error_stat": false, 00:35:18.578 "rdma_srq_size": 0, 00:35:18.578 "io_path_stat": false, 00:35:18.578 "allow_accel_sequence": false, 00:35:18.578 "rdma_max_cq_size": 0, 00:35:18.578 "rdma_cm_event_timeout_ms": 0, 00:35:18.578 "dhchap_digests": [ 00:35:18.578 "sha256", 00:35:18.578 "sha384", 00:35:18.578 "sha512" 00:35:18.578 ], 00:35:18.578 "dhchap_dhgroups": [ 00:35:18.578 "null", 00:35:18.578 "ffdhe2048", 00:35:18.578 "ffdhe3072", 00:35:18.578 "ffdhe4096", 00:35:18.578 "ffdhe6144", 00:35:18.578 "ffdhe8192" 00:35:18.578 ] 00:35:18.578 } 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "method": "bdev_nvme_attach_controller", 00:35:18.578 "params": { 00:35:18.578 "name": "nvme0", 00:35:18.578 "trtype": "TCP", 00:35:18.578 "adrfam": "IPv4", 00:35:18.578 "traddr": "127.0.0.1", 00:35:18.578 "trsvcid": "4420", 00:35:18.578 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.578 "prchk_reftag": false, 00:35:18.578 "prchk_guard": false, 00:35:18.578 "ctrlr_loss_timeout_sec": 0, 00:35:18.578 "reconnect_delay_sec": 0, 00:35:18.578 "fast_io_fail_timeout_sec": 0, 00:35:18.578 "psk": "key0", 00:35:18.578 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:18.578 "hdgst": false, 00:35:18.578 "ddgst": false, 00:35:18.578 "multipath": "multipath" 00:35:18.578 } 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "method": "bdev_nvme_set_hotplug", 00:35:18.578 "params": { 00:35:18.578 "period_us": 100000, 00:35:18.578 "enable": false 00:35:18.578 } 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "method": "bdev_wait_for_examine" 00:35:18.578 } 00:35:18.578 ] 00:35:18.578 }, 00:35:18.578 { 00:35:18.578 "subsystem": "nbd", 00:35:18.578 "config": [] 00:35:18.578 } 00:35:18.578 ] 00:35:18.578 }' 00:35:18.578 16:36:49 keyring_file -- keyring/file.sh@115 -- # killprocess 2201148 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2201148 ']' 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2201148 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2201148 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2201148' 00:35:18.578 killing process with pid 2201148 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@973 -- # kill 2201148 00:35:18.578 Received shutdown signal, test time was about 1.000000 seconds 00:35:18.578 00:35:18.578 Latency(us) 00:35:18.578 [2024-11-20T15:36:49.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.578 [2024-11-20T15:36:49.812Z] =================================================================================================================== 00:35:18.578 [2024-11-20T15:36:49.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.578 16:36:49 
keyring_file -- common/autotest_common.sh@978 -- # wait 2201148 00:35:18.578 16:36:49 keyring_file -- keyring/file.sh@118 -- # bperfpid=2202664 00:35:18.578 16:36:49 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2202664 /var/tmp/bperf.sock 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2202664 ']' 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:18.578 16:36:49 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:18.578 16:36:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.578 16:36:49 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:18.578 "subsystems": [ 00:35:18.579 { 00:35:18.579 "subsystem": "keyring", 00:35:18.579 "config": [ 00:35:18.579 { 00:35:18.579 "method": "keyring_file_add_key", 00:35:18.579 "params": { 00:35:18.579 "name": "key0", 00:35:18.579 "path": "/tmp/tmp.RjzKmCjhX4" 00:35:18.579 } 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "method": "keyring_file_add_key", 00:35:18.579 "params": { 00:35:18.579 "name": "key1", 00:35:18.579 "path": "/tmp/tmp.eEiqtx3bQe" 00:35:18.579 } 00:35:18.579 } 00:35:18.579 ] 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "subsystem": "iobuf", 00:35:18.579 "config": [ 00:35:18.579 { 00:35:18.579 "method": "iobuf_set_options", 00:35:18.579 "params": { 00:35:18.579 "small_pool_count": 8192, 00:35:18.579 "large_pool_count": 1024, 00:35:18.579 "small_bufsize": 8192, 00:35:18.579 "large_bufsize": 135168, 00:35:18.579 "enable_numa": false 00:35:18.579 } 00:35:18.579 } 00:35:18.579 ] 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "subsystem": "sock", 00:35:18.579 "config": [ 00:35:18.579 { 00:35:18.579 "method": "sock_set_default_impl", 00:35:18.579 "params": { 00:35:18.579 "impl_name": "posix" 00:35:18.579 } 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "method": "sock_impl_set_options", 00:35:18.579 "params": { 00:35:18.579 "impl_name": "ssl", 00:35:18.579 "recv_buf_size": 4096, 00:35:18.579 "send_buf_size": 4096, 00:35:18.579 "enable_recv_pipe": true, 00:35:18.579 "enable_quickack": false, 00:35:18.579 "enable_placement_id": 0, 00:35:18.579 "enable_zerocopy_send_server": true, 00:35:18.579 "enable_zerocopy_send_client": false, 00:35:18.579 "zerocopy_threshold": 0, 00:35:18.579 "tls_version": 0, 00:35:18.579 "enable_ktls": false 00:35:18.579 } 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "method": "sock_impl_set_options", 00:35:18.579 "params": { 00:35:18.579 "impl_name": "posix", 00:35:18.579 "recv_buf_size": 2097152, 00:35:18.579 "send_buf_size": 2097152, 00:35:18.579 "enable_recv_pipe": true, 00:35:18.579 "enable_quickack": false, 00:35:18.579 "enable_placement_id": 0, 00:35:18.579 "enable_zerocopy_send_server": true, 00:35:18.579 "enable_zerocopy_send_client": false, 00:35:18.579 "zerocopy_threshold": 0, 00:35:18.579 "tls_version": 0, 00:35:18.579 "enable_ktls": false 00:35:18.579 } 00:35:18.579 } 00:35:18.579 ] 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "subsystem": "vmd", 00:35:18.579 "config": [] 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "subsystem": "accel", 00:35:18.579 "config": [ 00:35:18.579 { 00:35:18.579 "method": "accel_set_options", 00:35:18.579 "params": { 00:35:18.579 "small_cache_size": 128, 00:35:18.579 "large_cache_size": 16, 00:35:18.579 "task_count": 2048, 00:35:18.579 "sequence_count": 2048, 00:35:18.579 "buf_count": 2048 00:35:18.579 } 
00:35:18.579 } 00:35:18.579 ] 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "subsystem": "bdev", 00:35:18.579 "config": [ 00:35:18.579 { 00:35:18.579 "method": "bdev_set_options", 00:35:18.579 "params": { 00:35:18.579 "bdev_io_pool_size": 65535, 00:35:18.579 "bdev_io_cache_size": 256, 00:35:18.579 "bdev_auto_examine": true, 00:35:18.579 "iobuf_small_cache_size": 128, 00:35:18.579 "iobuf_large_cache_size": 16 00:35:18.579 } 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "method": "bdev_raid_set_options", 00:35:18.579 "params": { 00:35:18.579 "process_window_size_kb": 1024, 00:35:18.579 "process_max_bandwidth_mb_sec": 0 00:35:18.579 } 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "method": "bdev_iscsi_set_options", 00:35:18.579 "params": { 00:35:18.579 "timeout_sec": 30 00:35:18.579 } 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "method": "bdev_nvme_set_options", 00:35:18.579 "params": { 00:35:18.579 "action_on_timeout": "none", 00:35:18.579 "timeout_us": 0, 00:35:18.579 "timeout_admin_us": 0, 00:35:18.579 "keep_alive_timeout_ms": 10000, 00:35:18.579 "arbitration_burst": 0, 00:35:18.579 "low_priority_weight": 0, 00:35:18.579 "medium_priority_weight": 0, 00:35:18.579 "high_priority_weight": 0, 00:35:18.579 "nvme_adminq_poll_period_us": 10000, 00:35:18.579 "nvme_ioq_poll_period_us": 0, 00:35:18.579 "io_queue_requests": 512, 00:35:18.579 "delay_cmd_submit": true, 00:35:18.579 "transport_retry_count": 4, 00:35:18.579 "bdev_retry_count": 3, 00:35:18.579 "transport_ack_timeout": 0, 00:35:18.579 "ctrlr_loss_timeout_sec": 0, 00:35:18.579 "reconnect_delay_sec": 0, 00:35:18.579 "fast_io_fail_timeout_sec": 0, 00:35:18.579 "disable_auto_failback": false, 00:35:18.579 "generate_uuids": false, 00:35:18.579 "transport_tos": 0, 00:35:18.579 "nvme_error_stat": false, 00:35:18.579 "rdma_srq_size": 0, 00:35:18.579 "io_path_stat": false, 00:35:18.579 "allow_accel_sequence": false, 00:35:18.579 "rdma_max_cq_size": 0, 00:35:18.579 "rdma_cm_event_timeout_ms": 0, 00:35:18.579 "dhchap_digests": [ 00:35:18.579 "sha256", 00:35:18.579 "sha384", 00:35:18.579 "sha512" 00:35:18.579 ], 00:35:18.579 "dhchap_dhgroups": [ 00:35:18.579 "null", 00:35:18.579 "ffdhe2048", 00:35:18.579 "ffdhe3072", 00:35:18.579 "ffdhe4096", 00:35:18.579 "ffdhe6144", 00:35:18.579 "ffdhe8192" 00:35:18.579 ] 00:35:18.579 } 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "method": "bdev_nvme_attach_controller", 00:35:18.579 "params": { 00:35:18.579 "name": "nvme0", 00:35:18.579 "trtype": "TCP", 00:35:18.579 "adrfam": "IPv4", 00:35:18.579 "traddr": "127.0.0.1", 00:35:18.579 "trsvcid": "4420", 00:35:18.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.579 "prchk_reftag": false, 00:35:18.579 "prchk_guard": false, 00:35:18.579 "ctrlr_loss_timeout_sec": 0, 00:35:18.579 "reconnect_delay_sec": 0, 00:35:18.579 "fast_io_fail_timeout_sec": 0, 00:35:18.579 "psk": "key0", 00:35:18.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:18.579 "hdgst": false, 00:35:18.579 "ddgst": false, 00:35:18.579 "multipath": "multipath" 00:35:18.579 } 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "method": "bdev_nvme_set_hotplug", 00:35:18.579 "params": { 00:35:18.579 "period_us": 100000, 00:35:18.579 "enable": false 00:35:18.579 } 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "method": "bdev_wait_for_examine" 00:35:18.579 } 00:35:18.579 ] 00:35:18.579 }, 00:35:18.579 { 00:35:18.579 "subsystem": "nbd", 00:35:18.579 "config": [] 00:35:18.579 } 00:35:18.579 ] 00:35:18.579 }' 00:35:18.579 16:36:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bperf.sock...' 00:35:18.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:18.579 16:36:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.579 16:36:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:18.839 [2024-11-20 16:36:49.826857] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 00:35:18.839 [2024-11-20 16:36:49.826904] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2202664 ] 00:35:18.839 [2024-11-20 16:36:49.901325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.839 [2024-11-20 16:36:49.943221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.098 [2024-11-20 16:36:50.107225] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:19.666 16:36:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.666 16:36:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:19.666 16:36:50 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:19.666 16:36:50 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:19.666 16:36:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.666 16:36:50 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:19.666 16:36:50 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:19.666 16:36:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:19.666 16:36:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.666 16:36:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.666 16:36:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:19.666 16:36:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.925 16:36:51 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:19.925 16:36:51 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:19.925 16:36:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:19.925 16:36:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.925 16:36:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.925 16:36:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:19.925 16:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:20.184 16:36:51 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:20.184 16:36:51 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:20.184 16:36:51 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:20.184 16:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:20.444 16:36:51 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:20.444 16:36:51 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:20.444 16:36:51 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.RjzKmCjhX4 /tmp/tmp.eEiqtx3bQe 00:35:20.444 16:36:51 keyring_file -- keyring/file.sh@20 -- # killprocess 2202664 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2202664 ']' 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2202664 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2202664 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2202664' 00:35:20.444 killing process with pid 2202664 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@973 -- # kill 2202664 00:35:20.444 Received shutdown signal, test time was about 1.000000 seconds 00:35:20.444 00:35:20.444 Latency(us) 00:35:20.444 [2024-11-20T15:36:51.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.444 [2024-11-20T15:36:51.678Z] =================================================================================================================== 00:35:20.444 [2024-11-20T15:36:51.678Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@978 -- # wait 2202664 00:35:20.444 16:36:51 keyring_file -- keyring/file.sh@21 -- # killprocess 2201141 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2201141 ']' 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2201141 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.444 16:36:51 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2201141 00:35:20.703 16:36:51 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:20.703 16:36:51 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:20.703 16:36:51 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2201141' 00:35:20.703 killing process with pid 2201141 00:35:20.703 16:36:51 keyring_file -- common/autotest_common.sh@973 -- # kill 2201141 00:35:20.703 16:36:51 keyring_file -- common/autotest_common.sh@978 -- # wait 2201141 00:35:20.962 00:35:20.962 real 0m11.656s 00:35:20.962 user 0m28.895s 00:35:20.962 sys 0m2.696s 00:35:20.962 16:36:52 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:20.962 16:36:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:20.962 ************************************ 00:35:20.962 END TEST keyring_file 00:35:20.962 ************************************ 00:35:20.962 16:36:52 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:20.962 16:36:52 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:20.962 16:36:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:20.962 16:36:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:20.962 16:36:52 
-- common/autotest_common.sh@10 -- # set +x 00:35:20.962 ************************************ 00:35:20.962 START TEST keyring_linux 00:35:20.962 ************************************ 00:35:20.962 16:36:52 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:20.962 Joined session keyring: 415160226 00:35:20.962 * Looking for test storage... 00:35:20.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:20.962 16:36:52 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:20.962 16:36:52 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:20.962 16:36:52 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:21.222 16:36:52 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:21.222 16:36:52 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.222 16:36:52 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:21.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.222 --rc genhtml_branch_coverage=1 00:35:21.222 --rc genhtml_function_coverage=1 00:35:21.222 --rc genhtml_legend=1 00:35:21.222 --rc geninfo_all_blocks=1 00:35:21.222 --rc geninfo_unexecuted_blocks=1 00:35:21.222 00:35:21.222 ' 00:35:21.222 16:36:52 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:21.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.222 --rc genhtml_branch_coverage=1 00:35:21.222 --rc genhtml_function_coverage=1 00:35:21.222 --rc genhtml_legend=1 00:35:21.222 --rc geninfo_all_blocks=1 00:35:21.222 --rc geninfo_unexecuted_blocks=1 00:35:21.222 00:35:21.222 ' 00:35:21.222 16:36:52 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:21.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.222 --rc genhtml_branch_coverage=1 00:35:21.222 --rc genhtml_function_coverage=1 00:35:21.222 --rc genhtml_legend=1 00:35:21.222 --rc geninfo_all_blocks=1 00:35:21.222 --rc geninfo_unexecuted_blocks=1 00:35:21.222 00:35:21.222 ' 00:35:21.222 16:36:52 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:21.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.222 --rc genhtml_branch_coverage=1 00:35:21.222 --rc genhtml_function_coverage=1 00:35:21.222 --rc genhtml_legend=1 00:35:21.222 --rc geninfo_all_blocks=1 00:35:21.222 --rc geninfo_unexecuted_blocks=1 00:35:21.222 00:35:21.222 ' 00:35:21.222 16:36:52 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:21.222 16:36:52 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.222 16:36:52 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.222 16:36:52 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.223 16:36:52 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.223 16:36:52 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.223 16:36:52 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.223 16:36:52 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.223 16:36:52 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:21.223 16:36:52 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:21.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:21.223 16:36:52 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:21.223 16:36:52 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:21.223 16:36:52 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:21.223 16:36:52 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:21.223 16:36:52 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:21.223 16:36:52 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:21.223 /tmp/:spdk-test:key0 00:35:21.223 16:36:52 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:21.223 
16:36:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:21.223 16:36:52 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:21.223 16:36:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:21.223 /tmp/:spdk-test:key1 00:35:21.223 16:36:52 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2203222 00:35:21.223 16:36:52 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2203222 00:35:21.223 16:36:52 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:21.223 16:36:52 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2203222 ']' 00:35:21.223 16:36:52 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:21.223 16:36:52 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:21.223 16:36:52 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:21.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:21.223 16:36:52 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:21.223 16:36:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:21.223 [2024-11-20 16:36:52.443876] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
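The two key files written above carry the PSK in the NVMe/TCP interchange format: a "NVMeTLSkey-1" prefix, a two-digit hash field (00 here, matching the digest argument of 0 passed to prep_key), a base64 payload, and a trailing colon, where the payload is the configured key followed by a 4-byte CRC-32. A quick standalone check of that structure against the key0 value printed above (illustrative only, not part of the captured run):

    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    payload=${psk#NVMeTLSkey-1:00:}     # drop the prefix
    payload=${payload%:}                # and the trailing colon
    # the first 32 bytes of the decoded payload are the configured key,
    # the remaining 4 bytes are the appended CRC
    echo "$payload" | base64 -d | head -c 32; echo    # prints 00112233445566778899aabbccddeeff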
00:35:21.223 [2024-11-20 16:36:52.443929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203222 ] 00:35:21.483 [2024-11-20 16:36:52.518503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.483 [2024-11-20 16:36:52.557701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.052 16:36:53 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.052 16:36:53 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:22.052 16:36:53 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:22.052 16:36:53 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.052 16:36:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:22.052 [2024-11-20 16:36:53.281068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:22.311 null0 00:35:22.311 [2024-11-20 16:36:53.313113] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:22.311 [2024-11-20 16:36:53.313494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:22.311 16:36:53 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.311 16:36:53 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:22.311 804646347 00:35:22.311 16:36:53 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:22.311 923525748 00:35:22.311 16:36:53 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2203358 00:35:22.311 16:36:53 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2203358 /var/tmp/bperf.sock 00:35:22.311 16:36:53 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:22.311 16:36:53 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2203358 ']' 00:35:22.311 16:36:53 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:22.311 16:36:53 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.311 16:36:53 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:22.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:22.311 16:36:53 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.311 16:36:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:22.311 [2024-11-20 16:36:53.386424] Starting SPDK v25.01-pre git sha1 66a383faf / DPDK 24.03.0 initialization... 
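bdevperf was launched just above with --wait-for-rpc so that the Linux-keyring plugin can be switched on before subsystem initialization; only after that is the controller attached by keyring name instead of by key file. The RPC sequence driven next over the bperf socket amounts to the following manual replay (same rpc.py path as used throughout this run; shown only as a sketch):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable      # enable the kernel-keyring backend before init
    $rpc -s /var/tmp/bperf.sock framework_start_init
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    $rpc -s /var/tmp/bperf.sock keyring_get_keys                        # the named key appears with its keyring serial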
00:35:22.311 [2024-11-20 16:36:53.386468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203358 ] 00:35:22.311 [2024-11-20 16:36:53.461652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.311 [2024-11-20 16:36:53.503577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.311 16:36:53 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.311 16:36:53 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:22.311 16:36:53 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:22.311 16:36:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:22.571 16:36:53 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:22.571 16:36:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:22.830 16:36:53 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:22.830 16:36:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:23.089 [2024-11-20 16:36:54.160037] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:23.089 nvme0n1 00:35:23.089 16:36:54 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:23.089 16:36:54 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:23.089 16:36:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:23.089 16:36:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:23.089 16:36:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:23.089 16:36:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:23.347 16:36:54 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:23.347 16:36:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:23.347 16:36:54 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:23.347 16:36:54 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:23.347 16:36:54 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:23.347 16:36:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:23.347 16:36:54 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:23.606 16:36:54 keyring_linux -- keyring/linux.sh@25 -- # sn=804646347 00:35:23.606 16:36:54 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:23.606 16:36:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:23.606 16:36:54 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 804646347 == \8\0\4\6\4\6\3\4\7 ]] 00:35:23.606 16:36:54 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 804646347 00:35:23.606 16:36:54 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:23.606 16:36:54 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:23.606 Running I/O for 1 seconds... 00:35:24.542 21864.00 IOPS, 85.41 MiB/s 00:35:24.543 Latency(us) 00:35:24.543 [2024-11-20T15:36:55.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.543 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:24.543 nvme0n1 : 1.01 21864.02 85.41 0.00 0.00 5835.04 3323.61 8363.64 00:35:24.543 [2024-11-20T15:36:55.777Z] =================================================================================================================== 00:35:24.543 [2024-11-20T15:36:55.777Z] Total : 21864.02 85.41 0.00 0.00 5835.04 3323.61 8363.64 00:35:24.543 { 00:35:24.543 "results": [ 00:35:24.543 { 00:35:24.543 "job": "nvme0n1", 00:35:24.543 "core_mask": "0x2", 00:35:24.543 "workload": "randread", 00:35:24.543 "status": "finished", 00:35:24.543 "queue_depth": 128, 00:35:24.543 "io_size": 4096, 00:35:24.543 "runtime": 1.005899, 00:35:24.543 "iops": 21864.024121706057, 00:35:24.543 "mibps": 85.40634422541429, 00:35:24.543 "io_failed": 0, 00:35:24.543 "io_timeout": 0, 00:35:24.543 "avg_latency_us": 5835.043968362228, 00:35:24.543 "min_latency_us": 3323.6114285714284, 00:35:24.543 "max_latency_us": 8363.641904761906 00:35:24.543 } 00:35:24.543 ], 00:35:24.543 "core_count": 1 00:35:24.543 } 00:35:24.543 16:36:55 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:24.543 16:36:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:24.801 16:36:55 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:24.801 16:36:55 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:24.802 16:36:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:24.802 16:36:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:24.802 16:36:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:24.802 16:36:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:25.061 16:36:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:25.061 16:36:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:25.061 16:36:56 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:25.061 16:36:56 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:25.061 16:36:56 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:25.061 16:36:56 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:35:25.061 16:36:56 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:25.061 16:36:56 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.061 16:36:56 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:25.061 16:36:56 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.061 16:36:56 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:25.061 16:36:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:25.321 [2024-11-20 16:36:56.343392] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:25.321 [2024-11-20 16:36:56.344102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128ef60 (107): Transport endpoint is not connected 00:35:25.321 [2024-11-20 16:36:56.345097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128ef60 (9): Bad file descriptor 00:35:25.321 [2024-11-20 16:36:56.346098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:25.321 [2024-11-20 16:36:56.346107] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:25.321 [2024-11-20 16:36:56.346114] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:25.321 [2024-11-20 16:36:56.346123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:25.321 request: 00:35:25.321 { 00:35:25.321 "name": "nvme0", 00:35:25.321 "trtype": "tcp", 00:35:25.321 "traddr": "127.0.0.1", 00:35:25.321 "adrfam": "ipv4", 00:35:25.321 "trsvcid": "4420", 00:35:25.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:25.321 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:25.321 "prchk_reftag": false, 00:35:25.321 "prchk_guard": false, 00:35:25.321 "hdgst": false, 00:35:25.321 "ddgst": false, 00:35:25.321 "psk": ":spdk-test:key1", 00:35:25.321 "allow_unrecognized_csi": false, 00:35:25.321 "method": "bdev_nvme_attach_controller", 00:35:25.321 "req_id": 1 00:35:25.321 } 00:35:25.321 Got JSON-RPC error response 00:35:25.321 response: 00:35:25.321 { 00:35:25.321 "code": -5, 00:35:25.321 "message": "Input/output error" 00:35:25.321 } 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@33 -- # sn=804646347 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 804646347 00:35:25.321 1 links removed 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@33 -- # sn=923525748 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 923525748 00:35:25.321 1 links removed 00:35:25.321 16:36:56 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2203358 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2203358 ']' 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2203358 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2203358 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2203358' 00:35:25.321 killing process with pid 2203358 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@973 -- # kill 2203358 00:35:25.321 Received shutdown signal, test time was about 1.000000 seconds 00:35:25.321 00:35:25.321 
Latency(us) 00:35:25.321 [2024-11-20T15:36:56.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.321 [2024-11-20T15:36:56.555Z] =================================================================================================================== 00:35:25.321 [2024-11-20T15:36:56.555Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:25.321 16:36:56 keyring_linux -- common/autotest_common.sh@978 -- # wait 2203358 00:35:25.580 16:36:56 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2203222 00:35:25.580 16:36:56 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2203222 ']' 00:35:25.580 16:36:56 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2203222 00:35:25.580 16:36:56 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:25.580 16:36:56 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.580 16:36:56 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2203222 00:35:25.580 16:36:56 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:25.580 16:36:56 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:25.580 16:36:56 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2203222' 00:35:25.580 killing process with pid 2203222 00:35:25.580 16:36:56 keyring_linux -- common/autotest_common.sh@973 -- # kill 2203222 00:35:25.580 16:36:56 keyring_linux -- common/autotest_common.sh@978 -- # wait 2203222 00:35:25.839 00:35:25.840 real 0m4.843s 00:35:25.840 user 0m8.787s 00:35:25.840 sys 0m1.521s 00:35:25.840 16:36:56 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:25.840 16:36:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:25.840 ************************************ 00:35:25.840 END TEST keyring_linux 00:35:25.840 ************************************ 00:35:25.840 16:36:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:25.840 16:36:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:25.840 16:36:56 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:25.840 16:36:56 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:25.840 16:36:56 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:25.840 16:36:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:25.840 16:36:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:25.840 16:36:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:25.840 16:36:56 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:25.840 16:36:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:25.840 16:36:56 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:25.840 16:36:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:25.840 16:36:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:25.840 16:36:56 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:25.840 16:36:56 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:25.840 16:36:56 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:25.840 16:36:56 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:25.840 16:36:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.840 16:36:56 -- common/autotest_common.sh@10 -- # set +x 00:35:25.840 16:36:56 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:25.840 16:36:56 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:25.840 16:36:56 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:25.840 16:36:56 -- common/autotest_common.sh@10 -- # set +x 00:35:31.117 INFO: APP EXITING 
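Both kernel keys were unlinked during cleanup (the two "1 links removed" messages above), so nothing from this suite should remain in the session keyring. A hypothetical post-run check, not part of the captured output:

    for name in :spdk-test:key0 :spdk-test:key1; do
        keyctl search @s user "$name" 2>/dev/null || echo "$name: gone"
    done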
00:35:31.117 INFO: killing all VMs 00:35:31.117 INFO: killing vhost app 00:35:31.117 INFO: EXIT DONE 00:35:33.654 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:33.654 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:33.654 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:33.654 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:33.654 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:33.654 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:33.654 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:33.654 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:33.654 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:33.654 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:33.654 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:33.654 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:33.913 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:33.913 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:33.913 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:33.913 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:33.913 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:37.204 Cleaning 00:35:37.204 Removing: /var/run/dpdk/spdk0/config 00:35:37.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:37.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:37.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:37.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:37.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:37.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:37.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:37.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:37.204 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:37.204 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:37.205 Removing: /var/run/dpdk/spdk1/config 00:35:37.205 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:37.205 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:37.205 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:37.205 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:37.205 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:37.205 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:37.205 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:37.205 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:37.205 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:37.205 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:37.205 Removing: /var/run/dpdk/spdk2/config 00:35:37.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:37.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:37.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:37.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:37.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:37.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:37.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:37.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:37.205 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:37.205 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:37.205 Removing: /var/run/dpdk/spdk3/config 00:35:37.205 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:37.205 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:37.205 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:37.205 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:37.205 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:37.205 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:37.205 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:37.205 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:37.205 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:37.205 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:37.205 Removing: /var/run/dpdk/spdk4/config 00:35:37.205 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:37.205 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:37.205 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:37.205 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:37.205 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:37.205 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:37.205 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:37.205 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:37.205 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:37.205 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:37.205 Removing: /dev/shm/bdev_svc_trace.1 00:35:37.205 Removing: /dev/shm/nvmf_trace.0 00:35:37.205 Removing: /dev/shm/spdk_tgt_trace.pid1723291 00:35:37.205 Removing: /var/run/dpdk/spdk0 00:35:37.205 Removing: /var/run/dpdk/spdk1 00:35:37.205 Removing: /var/run/dpdk/spdk2 00:35:37.205 Removing: /var/run/dpdk/spdk3 00:35:37.205 Removing: /var/run/dpdk/spdk4 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1720924 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1721986 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1723291 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1723796 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1724717 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1724899 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1725875 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1725950 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1726236 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1727972 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1729475 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1729769 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1730061 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1730344 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1730663 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1730914 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1731135 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1731449 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1732198 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1735658 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1735966 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1736212 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1736234 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1736720 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1736728 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1737220 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1737230 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1737495 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1737500 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1737758 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1737990 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1738452 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1738615 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1738975 00:35:37.205 Removing: 
/var/run/dpdk/spdk_pid1742803 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1747078 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1757118 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1757809 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1762310 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1762558 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1766829 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1772750 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1775564 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1786298 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1795401 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1797050 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1797981 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1815130 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1819161 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1865201 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1870540 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1876311 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1882928 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1882931 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1884192 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1884932 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1885843 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1886529 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1886542 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1886768 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1886998 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1887001 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1887913 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1888710 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1889531 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1890214 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1890216 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1890447 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1891467 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1892456 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1900765 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1929690 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1934206 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1935812 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1937643 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1937793 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1937895 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1938129 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1938630 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1940364 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1941238 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1941674 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1943834 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1944323 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1944826 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1949101 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1954533 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1954534 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1954535 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1959006 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1967357 00:35:37.205 Removing: /var/run/dpdk/spdk_pid1971183 00:35:37.206 Removing: /var/run/dpdk/spdk_pid1977176 00:35:37.206 Removing: /var/run/dpdk/spdk_pid1978382 00:35:37.206 Removing: /var/run/dpdk/spdk_pid1979800 00:35:37.206 Removing: /var/run/dpdk/spdk_pid1981239 00:35:37.206 Removing: /var/run/dpdk/spdk_pid1985816 00:35:37.206 Removing: /var/run/dpdk/spdk_pid1990157 00:35:37.465 Removing: /var/run/dpdk/spdk_pid1994184 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2001856 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2001989 00:35:37.465 Removing: 
/var/run/dpdk/spdk_pid2006517 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2006747 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2007051 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2007490 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2007560 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2012444 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2013015 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2017356 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2020100 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2025372 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2030836 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2039427 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2046490 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2046543 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2065934 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2066411 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2067093 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2067574 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2068307 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2068783 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2069264 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2069948 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2073984 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2074225 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2080299 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2080507 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2085824 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2090057 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2099921 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2100991 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2105250 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2105501 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2109744 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2115374 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2117954 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2127896 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2136569 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2138178 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2139098 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2155743 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2159633 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2162471 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2170267 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2170405 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2175487 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2177450 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2179348 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2180460 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2182436 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2183713 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2192589 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2193433 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2193892 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2196380 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2196848 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2197312 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2201141 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2201148 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2202664 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2203222 00:35:37.465 Removing: /var/run/dpdk/spdk_pid2203358 00:35:37.465 Clean 00:35:37.738 16:37:08 -- common/autotest_common.sh@1453 -- # return 0 00:35:37.738 16:37:08 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:37.738 16:37:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:37.738 16:37:08 -- common/autotest_common.sh@10 -- # set +x 00:35:37.738 16:37:08 -- 
spdk/autotest.sh@391 -- # timing_exit autotest
00:35:37.738 16:37:08 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:37.738 16:37:08 -- common/autotest_common.sh@10 -- # set +x
00:35:37.738 16:37:08 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:37.738 16:37:08 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:37.738 16:37:08 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:37.738 16:37:08 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:37.738 16:37:08 -- spdk/autotest.sh@398 -- # hostname
00:35:37.738 16:37:08 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:38.005 geninfo: WARNING: invalid characters removed from testname!
00:35:59.943 16:37:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:01.323 16:37:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:03.230 16:37:34 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:05.138 16:37:35 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:06.514 16:37:37 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:08.419 16:37:39 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:10.325 16:37:41 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:10.325 16:37:41 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:10.325 16:37:41 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:10.325 16:37:41 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:10.325 16:37:41 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:10.325 16:37:41 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:10.325 + [[ -n 1643580 ]]
00:36:10.325 + sudo kill 1643580
00:36:10.334 [Pipeline] }
00:36:10.350 [Pipeline] // stage
00:36:10.356 [Pipeline] }
00:36:10.371 [Pipeline] // timeout
00:36:10.376 [Pipeline] }
00:36:10.391 [Pipeline] // catchError
00:36:10.396 [Pipeline] }
00:36:10.410 [Pipeline] // wrap
00:36:10.417 [Pipeline] }
00:36:10.430 [Pipeline] // catchError
00:36:10.440 [Pipeline] stage
00:36:10.442 [Pipeline] { (Epilogue)
00:36:10.456 [Pipeline] catchError
00:36:10.458 [Pipeline] {
00:36:10.472 [Pipeline] echo
00:36:10.474 Cleanup processes
00:36:10.481 [Pipeline] sh
00:36:10.765 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:10.765 2213950 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:10.779 [Pipeline] sh
00:36:11.067 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:11.067 ++ grep -v 'sudo pgrep'
00:36:11.067 ++ awk '{print $1}'
00:36:11.067 + sudo kill -9
00:36:11.067 + true
00:36:11.079 [Pipeline] sh
00:36:11.364 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:23.592 [Pipeline] sh
00:36:23.879 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:23.879 Artifacts sizes are good
00:36:23.893 [Pipeline] archiveArtifacts
00:36:23.900 Archiving artifacts
00:36:24.053 [Pipeline] sh
00:36:24.420 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:24.431 [Pipeline] cleanWs
00:36:24.440 [WS-CLEANUP] Deleting project workspace...
00:36:24.440 [WS-CLEANUP] Deferred wipeout is used...
00:36:24.446 [WS-CLEANUP] done
00:36:24.448 [Pipeline] }
00:36:24.464 [Pipeline] // catchError
00:36:24.474 [Pipeline] sh
00:36:24.753 + logger -p user.info -t JENKINS-CI
00:36:24.762 [Pipeline] }
00:36:24.774 [Pipeline] // stage
00:36:24.780 [Pipeline] }
00:36:24.793 [Pipeline] // node
00:36:24.797 [Pipeline] End of Pipeline
00:36:24.821 Finished: SUCCESS